W4-W5. Four Fundamental Subspaces, Orthogonal Complements, Least Squares
1. Summary
1.1 Four Fundamental Subspaces of a Matrix
Every matrix defines four important vector subspaces that reveal its complete algebraic structure. Understanding these subspaces is essential for solving systems of equations, understanding matrix transformations, and applications in data science and engineering.
For an \(m \times n\) matrix \(A\) with \(\text{rank}(A) = r\), there are four fundamental subspaces:
1. Column Space \(\mathcal{C}(A)\) (or \(\text{Col}(A)\))
The column space is the set of all possible linear combinations of the columns of \(A\): \[\mathcal{C}(A) = \{A\mathbf{x} \mid \mathbf{x} \in \mathbb{R}^n\}\]
- It is a subspace of \(\mathbb{R}^m\) (the “output space”)
- Its dimension is \(r\) (the rank)
- The system \(A\mathbf{x} = \mathbf{b}\) has a solution if and only if \(\mathbf{b} \in \mathcal{C}(A)\)
- Basis: The pivot columns of the original matrix \(A\) (identified from the row echelon form)
2. Null Space \(\mathcal{N}(A)\) (or \(\text{Nul}(A)\))
The null space is the set of all vectors that \(A\) maps to zero: \[\mathcal{N}(A) = \{\mathbf{x} \in \mathbb{R}^n \mid A\mathbf{x} = \mathbf{0}\}\]
- It is a subspace of \(\mathbb{R}^n\) (the “input space”)
- Its dimension is \(n - r\) (the number of free variables)
- It contains all solutions to the homogeneous system \(A\mathbf{x} = \mathbf{0}\)
- Basis: The special solutions found by setting each free variable to 1 (and others to 0)
3. Row Space \(\mathcal{C}(A^T)\) (or \(\text{Row}(A)\))
The row space is the span of the rows of \(A\), equivalently the column space of \(A^T\): \[\mathcal{C}(A^T) = \{A^T\mathbf{y} \mid \mathbf{y} \in \mathbb{R}^m\}\]
- It is a subspace of \(\mathbb{R}^n\)
- Its dimension is \(r\) (the same as the column space)
- Basis: The nonzero rows of the row echelon form (REF) of \(A\)
- The row space contains all linear combinations of the rows of \(A\)
4. Left Null Space \(\mathcal{N}(A^T)\) (or \(\text{Nul}(A^T)\))
The left null space is the null space of \(A^T\): \[\mathcal{N}(A^T) = \{\mathbf{y} \in \mathbb{R}^m \mid A^T\mathbf{y} = \mathbf{0}\}\]
- It is a subspace of \(\mathbb{R}^m\)
- Its dimension is \(m - r\)
- It is called the “left” null space because \(A^T\mathbf{y} = \mathbf{0}\) is equivalent to \(\mathbf{y}^TA = \mathbf{0}^T\) (\(\mathbf{y}^T\) multiplies \(A\) from the left)
- Basis: Found by solving \(A^T\mathbf{y} = \mathbf{0}\) using row reduction on \(A^T\)
1.1.1 Summary Table
| Subspace | Notation | Lives in | Dimension | How to Find Basis |
|---|---|---|---|---|
| Column Space | \(\mathcal{C}(A)\) | \(\mathbb{R}^m\) | \(r\) | Pivot columns of \(A\) |
| Null Space | \(\mathcal{N}(A)\) | \(\mathbb{R}^n\) | \(n - r\) | Special solutions of \(A\mathbf{x} = \mathbf{0}\) |
| Row Space | \(\mathcal{C}(A^T)\) | \(\mathbb{R}^n\) | \(r\) | Nonzero rows of \(\text{REF}(A)\) |
| Left Null Space | \(\mathcal{N}(A^T)\) | \(\mathbb{R}^m\) | \(m - r\) | Special solutions of \(A^T\mathbf{y} = \mathbf{0}\) |
1.1.2 The Rank-Nullity Theorem
The dimensions of these subspaces are not independent. Two fundamental equations connect them:
\[\text{dim}(\mathcal{C}(A^T)) + \text{dim}(\mathcal{N}(A)) = n\] \[\text{dim}(\mathcal{C}(A)) + \text{dim}(\mathcal{N}(A^T)) = m\]
This simplifies to: \[r + (n - r) = n \quad \text{and} \quad r + (m - r) = m\]
These follow from the Rank-Nullity Theorem: for any matrix, the rank plus the dimension of the null space equals the number of columns. Applying the theorem to \(A\) (which has \(n\) columns) gives the first equation; applying it to \(A^T\) (which has \(m\) columns and the same rank \(r\)) gives the second.
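These identities are easy to sanity-check numerically. A minimal sketch, assuming NumPy is available (the matrix is an arbitrary rank-2 example, not one from the text):

```python
import numpy as np

# An arbitrary 3x4 example matrix, rank 2 by construction: row3 = row1 + row2.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.],
              [1., 3., 1., 3.]])
m, n = A.shape

r = np.linalg.matrix_rank(A)

# dim N(A) = number of columns minus the number of nonzero singular values.
sv = np.linalg.svd(A, compute_uv=False)
nullity = n - np.sum(sv > 1e-10)

print(r + nullity == n)                           # rank-nullity for A
print(np.linalg.matrix_rank(A.T) + (m - r) == m)  # rank-nullity for A^T
```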
1.1.3 Computing the Four Subspaces: Step-by-Step
Example Setup: Let \(A = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 1 & 4 \\ 1 & 2 & 1 & 3 \end{bmatrix}\) (a \(3 \times 4\) matrix).
Step 1: Row reduce to REF
\[A \xrightarrow{\text{row operations}} R = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}\]
The pivots are in columns 1 and 3. Thus \(r = 2\).
Step 2: Column Space \(\mathcal{C}(A)\)
The pivot columns in the original matrix \(A\) form a basis: \[\mathcal{C}(A) = \text{span}\left\{\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}\right\}\]
Dimension: \(\text{dim}(\mathcal{C}(A)) = 2\).
Step 3: Row Space \(\mathcal{C}(A^T)\)
The nonzero rows of \(R\) form a basis for the row space: \[\mathcal{C}(A^T) = \text{span}\left\{\begin{bmatrix} 1 \\ 2 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ 2 \end{bmatrix}\right\}\]
Dimension: \(\text{dim}(\mathcal{C}(A^T)) = 2\).
Step 4: Null Space \(\mathcal{N}(A)\)
Solve \(A\mathbf{x} = \mathbf{0}\) using \(R\mathbf{x} = \mathbf{0}\): \[\begin{cases} x_1 + 2x_2 + x_4 = 0 \\ x_3 + 2x_4 = 0 \end{cases}\]
Free variables: \(x_2\) and \(x_4\). Setting \(x_2 = 1, x_4 = 0\) gives \(\mathbf{n}_1 = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}\). Setting \(x_2 = 0, x_4 = 1\) gives \(\mathbf{n}_2 = \begin{bmatrix} -1 \\ 0 \\ -2 \\ 1 \end{bmatrix}\).
\[\mathcal{N}(A) = \text{span}\left\{\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ -2 \\ 1 \end{bmatrix}\right\}\]
Dimension: \(\text{dim}(\mathcal{N}(A)) = 2 = 4 - 2 = n - r\) ✓
Step 5: Left Null Space \(\mathcal{N}(A^T)\)
Solve \(A^T\mathbf{y} = \mathbf{0}\). Form \(A^T\) and row reduce: \[A^T = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 0 & 1 & 1 \\ 1 & 4 & 3 \end{bmatrix} \xrightarrow{\text{row operations}} \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\]
Free variable: \(y_3 = t\). The system becomes: \[\begin{cases} y_1 + 2y_2 + y_3 = 0 \\ y_2 + y_3 = 0 \end{cases}\]
Setting \(y_3 = 1\): \(y_2 = -1\), \(y_1 = 1\). Thus: \[\mathcal{N}(A^T) = \text{span}\left\{\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}\right\}\]
Dimension: \(\text{dim}(\mathcal{N}(A^T)) = 1 = 3 - 2 = m - r\) ✓
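The whole computation can be verified in a few lines, assuming NumPy; the vectors below are the bases just derived in Steps 4 and 5:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [2., 4., 1., 4.],
              [1., 2., 1., 3.]])

print(np.linalg.matrix_rank(A))   # 2

# Step 4: null space basis vectors satisfy A n = 0.
n1 = np.array([-2., 1., 0., 0.])
n2 = np.array([-1., 0., -2., 1.])
print(np.allclose(A @ n1, 0), np.allclose(A @ n2, 0))

# Step 5: left null space vector satisfies A^T y = 0.
y = np.array([1., -1., 1.])
print(np.allclose(A.T @ y, 0))

# Row space is orthogonal to null space: REF rows dotted with n1, n2.
row1 = np.array([1., 2., 0., 1.])
row2 = np.array([0., 0., 1., 2.])
print(row1 @ n1, row2 @ n2)       # both zero
```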
1.2 Orthogonal Complements
Two subspaces \(V\) and \(W\) of \(\mathbb{R}^n\) are orthogonal if every vector in \(V\) is orthogonal to every vector in \(W\). This is denoted \(V \perp W\).
1.2.1 Definition of Orthogonal Complement
For a subspace \(V \subseteq \mathbb{R}^n\), the orthogonal complement \(V^\perp\) (read “V perp”) is the set of all vectors orthogonal to every vector in \(V\): \[V^\perp = \{\mathbf{x} \in \mathbb{R}^n \mid \mathbf{x} \cdot \mathbf{v} = 0 \text{ for all } \mathbf{v} \in V\}\]
Key fact: \(V^\perp\) is always a subspace. To check if \(\mathbf{x} \in V^\perp\), it suffices to verify that \(\mathbf{x}\) is orthogonal to a basis of \(V\) (not every vector in \(V\)).
1.2.2 The Fundamental Theorem of Linear Algebra
The four fundamental subspaces come in orthogonal pairs:
\[\boxed{\mathcal{C}(A^T) \perp \mathcal{N}(A) \quad \text{in } \mathbb{R}^n}\] \[\boxed{\mathcal{C}(A) \perp \mathcal{N}(A^T) \quad \text{in } \mathbb{R}^m}\]
Equivalently: \[\mathcal{C}(A^T)^\perp = \mathcal{N}(A) \quad \text{and} \quad \mathcal{N}(A)^\perp = \mathcal{C}(A^T)\] \[\mathcal{C}(A)^\perp = \mathcal{N}(A^T) \quad \text{and} \quad \mathcal{N}(A^T)^\perp = \mathcal{C}(A)\]
Why is this true? Consider \(\mathbf{x} \in \mathcal{N}(A)\), so \(A\mathbf{x} = \mathbf{0}\). Let \(\mathbf{r}\) be any row of \(A\). Then: \[\mathbf{r} \cdot \mathbf{x} = 0\]
This says that \(\mathbf{x}\) is orthogonal to every row of \(A\), i.e., \(\mathbf{x} \in \mathcal{C}(A^T)^\perp\). Conversely, if \(\mathbf{x}\) is orthogonal to every row, then \(A\mathbf{x} = \mathbf{0}\).
1.2.3 Orthogonal Decomposition
The orthogonal complement relationship gives us a powerful decomposition: every vector in \(\mathbb{R}^n\) can be uniquely written as the sum of a component in a subspace and a component in its orthogonal complement.
Theorem (Orthogonal Decomposition): If \(V\) is a subspace of \(\mathbb{R}^n\), then every \(\mathbf{x} \in \mathbb{R}^n\) can be uniquely written as: \[\mathbf{x} = \mathbf{v} + \mathbf{w}\] where \(\mathbf{v} \in V\) and \(\mathbf{w} \in V^\perp\).
We write this as: \[\mathbb{R}^n = V \oplus V^\perp\]
The symbol \(\oplus\) denotes direct sum, meaning the spaces only overlap at \(\mathbf{0}\).
For matrices, this gives two fundamental decompositions: \[\mathbb{R}^n = \mathcal{C}(A^T) \oplus \mathcal{N}(A)\] \[\mathbb{R}^m = \mathcal{C}(A) \oplus \mathcal{N}(A^T)\]
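These decompositions can be demonstrated numerically: \(A^+A\) (with \(A^+\) the pseudoinverse) is the orthogonal projector onto the row space, a standard property, so subtracting the projection leaves the null-space component. A sketch using the example matrix from Section 1.1.3, assuming NumPy:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [2., 4., 1., 4.],
              [1., 2., 1., 3.]])

# P projects onto the row space C(A^T); I - P projects onto N(A).
P = np.linalg.pinv(A) @ A

x = np.array([1., 2., 3., 4.])
x_row = P @ x               # component in C(A^T)
x_null = x - x_row          # component in N(A)

print(np.allclose(x_row + x_null, x))   # x = v + w
print(np.allclose(A @ x_null, 0))       # w lies in N(A)
print(abs(x_row @ x_null) < 1e-9)       # v is orthogonal to w
```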
1.3 Properties of Orthogonal Complements
1.3.1 Property 1: Intersection is Zero
For any subspace \(V\), we have: \[V \cap V^\perp = \{\mathbf{0}\}\]
Proof: Suppose \(\mathbf{x} \in V \cap V^\perp\). Then \(\mathbf{x} \in V\) and \(\mathbf{x} \in V^\perp\). Since \(\mathbf{x} \in V^\perp\), it is orthogonal to all vectors in \(V\), including itself: \[\mathbf{x} \cdot \mathbf{x} = \|\mathbf{x}\|^2 = 0\]
Therefore \(\mathbf{x} = \mathbf{0}\). \(\square\)
1.3.2 Property 2: Dimensions Add to n
For any subspace \(V \subseteq \mathbb{R}^n\): \[\text{dim}(V) + \text{dim}(V^\perp) = n\]
Proof sketch: Let \(\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}\) be an orthonormal basis for \(V\). Extend this to an orthonormal basis \(\{\mathbf{v}_1, \ldots, \mathbf{v}_k, \mathbf{w}_1, \ldots, \mathbf{w}_{n-k}\}\) for \(\mathbb{R}^n\) (using Gram-Schmidt). The vectors \(\{\mathbf{w}_1, \ldots, \mathbf{w}_{n-k}\}\) are orthogonal to all \(\mathbf{v}_i\), so they form a basis for \(V^\perp\). Thus \(\text{dim}(V^\perp) = n - k\). \(\square\)
1.3.3 Property 3: Double Complement Returns to V
For any subspace \(V\): \[(V^\perp)^\perp = V\]
Proof: We show two inclusions.
(\(V \subseteq (V^\perp)^\perp\)): Let \(\mathbf{v} \in V\). For any \(\mathbf{w} \in V^\perp\), we have \(\mathbf{v} \cdot \mathbf{w} = 0\) by definition of \(V^\perp\). Thus \(\mathbf{v}\) is orthogonal to all vectors in \(V^\perp\), so \(\mathbf{v} \in (V^\perp)^\perp\).
(\(V \supseteq (V^\perp)^\perp\)): By Property 2: \(\text{dim}(V) + \text{dim}(V^\perp) = n\), and \(\text{dim}(V^\perp) + \text{dim}((V^\perp)^\perp) = n\). Thus \(\text{dim}(V) = \text{dim}((V^\perp)^\perp)\). Since \(V \subseteq (V^\perp)^\perp\) and they have equal dimension, they must be equal. \(\square\)
1.3.4 Property 4: Uniqueness of Decomposition
If \(\mathbf{x} = \mathbf{v}_1 + \mathbf{w}_1 = \mathbf{v}_2 + \mathbf{w}_2\) where \(\mathbf{v}_1, \mathbf{v}_2 \in V\) and \(\mathbf{w}_1, \mathbf{w}_2 \in V^\perp\), then \(\mathbf{v}_1 = \mathbf{v}_2\) and \(\mathbf{w}_1 = \mathbf{w}_2\).
Proof: Rearranging: \[\mathbf{v}_1 - \mathbf{v}_2 = \mathbf{w}_2 - \mathbf{w}_1\]
The left side is in \(V\) (subspace closed under subtraction), and the right side is in \(V^\perp\). The only vector in both is \(\mathbf{0}\) (Property 1), so: \[\mathbf{v}_1 - \mathbf{v}_2 = \mathbf{0} \implies \mathbf{v}_1 = \mathbf{v}_2\] \[\mathbf{w}_2 - \mathbf{w}_1 = \mathbf{0} \implies \mathbf{w}_1 = \mathbf{w}_2\] \(\square\)
1.4 Examples of Orthogonal Complements
1.4.1 Geometric Examples in \(\mathbb{R}^3\)
- If \(V\) is the xy-plane (the set \(\{(x, y, 0) \mid x, y \in \mathbb{R}\}\)), then \(V^\perp\) is the z-axis (the set \(\{(0, 0, z) \mid z \in \mathbb{R}\}\)).
- If \(V\) is a line through the origin with direction vector \(\mathbf{v}\), then \(V^\perp\) is the plane perpendicular to \(\mathbf{v}\).
- If \(V = \mathbb{R}^3\), then \(V^\perp = \{\mathbf{0}\}\).
- If \(V = \{\mathbf{0}\}\), then \(V^\perp = \mathbb{R}^3\).
1.4.2 Example in \(\mathbb{R}^4\)
Let \(V = \text{span}\left\{\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix}\right\}\).
To find \(V^\perp\), we need all vectors \(\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}\) such that: \[\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} \cdot \mathbf{x} = 0 \quad \text{and} \quad \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} \cdot \mathbf{x} = 0\]
This gives: \[x_1 + x_3 = 0 \quad \text{and} \quad x_2 + x_4 = 0\]
Setting \(x_3 = -x_1\) and \(x_4 = -x_2\): \[\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ -x_1 \\ -x_2 \end{pmatrix} = x_1\begin{pmatrix} 1 \\ 0 \\ -1 \\ 0 \end{pmatrix} + x_2\begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}\]
Thus: \[V^\perp = \text{span}\left\{\begin{pmatrix} 1 \\ 0 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}\right\}\]
Note: \(\text{dim}(V) + \text{dim}(V^\perp) = 2 + 2 = 4\) ✓
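A quick numerical check of this example, assuming NumPy:

```python
import numpy as np

V = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])    # rows span V
W = np.array([[1., 0., -1., 0.],
              [0., 1., 0., -1.]])   # rows span V-perp

# Every basis vector of V is orthogonal to every basis vector of V-perp.
print(np.allclose(V @ W.T, 0))

# Dimensions add to 4.
print(np.linalg.matrix_rank(V) + np.linalg.matrix_rank(W))   # 4
```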
1.5 Proving the Fundamental Theorem
1.5.1 Proof: \(\mathcal{C}(A^T)^\perp = \mathcal{N}(A)\)
Theorem: For any \(m \times n\) matrix \(A\), \(\mathcal{C}(A^T)^\perp = \mathcal{N}(A)\).
Proof: We show both inclusions.
(\(\mathcal{N}(A) \subseteq \mathcal{C}(A^T)^\perp\)):
Let \(\mathbf{x} \in \mathcal{N}(A)\), so \(A\mathbf{x} = \mathbf{0}\). We must show \(\mathbf{x} \in \mathcal{C}(A^T)^\perp\), i.e., \(\mathbf{x}\) is orthogonal to all vectors in \(\mathcal{C}(A^T)\).
Let \(\mathbf{y} \in \mathcal{C}(A^T)\). Then \(\mathbf{y} = A^T\mathbf{w}\) for some \(\mathbf{w} \in \mathbb{R}^m\). Compute: \[\mathbf{x} \cdot \mathbf{y} = \mathbf{x}^T\mathbf{y} = \mathbf{x}^T(A^T\mathbf{w}) = (A\mathbf{x})^T\mathbf{w} = \mathbf{0}^T\mathbf{w} = 0\]
Thus \(\mathbf{x} \perp \mathbf{y}\) for all \(\mathbf{y} \in \mathcal{C}(A^T)\), so \(\mathbf{x} \in \mathcal{C}(A^T)^\perp\).
(\(\mathcal{C}(A^T)^\perp \subseteq \mathcal{N}(A)\)):
Let \(\mathbf{x} \in \mathcal{C}(A^T)^\perp\), so \(\mathbf{x} \cdot \mathbf{y} = 0\) for all \(\mathbf{y} \in \mathcal{C}(A^T)\). We must show \(A\mathbf{x} = \mathbf{0}\).
In particular, \(\mathbf{x}\) is orthogonal to each row of \(A\) (since the rows of \(A\) are columns of \(A^T\)). Let \(\mathbf{a}_i\) be the \(i\)-th row of \(A\). Then: \[\mathbf{a}_i \cdot \mathbf{x} = 0 \quad \text{for all } i = 1, 2, \ldots, m\]
But this is exactly saying that \(A\mathbf{x} = \mathbf{0}\) (each entry of \(A\mathbf{x}\) is \(\mathbf{a}_i \cdot \mathbf{x}\)). Thus \(\mathbf{x} \in \mathcal{N}(A)\).
Conclusion: \(\mathcal{C}(A^T)^\perp = \mathcal{N}(A)\). \(\square\)
1.5.2 Proof: \(\mathcal{N}(A)^\perp = \mathcal{C}(A^T)\)
This follows immediately from the previous result and Property 3: \[\mathcal{N}(A)^\perp = (\mathcal{C}(A^T)^\perp)^\perp = \mathcal{C}(A^T)\] \(\square\)
Similarly, by applying the same arguments to \(A^T\): \[\mathcal{C}(A)^\perp = \mathcal{N}(A^T) \quad \text{and} \quad \mathcal{N}(A^T)^\perp = \mathcal{C}(A)\]
1.6 Why Rank of A and \(A^T\) are Equal
An important consequence of the Fundamental Theorem is that the row rank equals the column rank.
Theorem: \(\text{rank}(A) = \text{rank}(A^T)\).
Proof: From the Rank-Nullity Theorem applied to \(A\): \[\text{rank}(A) + \text{dim}(\mathcal{N}(A)) = n\]
From the orthogonal complement relationship: \[\text{dim}(\mathcal{C}(A^T)) + \text{dim}(\mathcal{C}(A^T)^\perp) = n\]
But \(\mathcal{C}(A^T)^\perp = \mathcal{N}(A)\), so: \[\text{dim}(\mathcal{C}(A^T)) + \text{dim}(\mathcal{N}(A)) = n\]
Comparing with the first equation: \[\text{rank}(A) = \text{dim}(\mathcal{C}(A)) = \text{dim}(\mathcal{C}(A^T)) = \text{rank}(A^T)\] \(\square\)
This means the number of independent rows equals the number of independent columns!
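A minimal numerical illustration, assuming NumPy (the matrix is a made-up rank-2 example):

```python
import numpy as np

# A 5x3 matrix whose third column is the sum of the first two, so rank is 2.
c1 = np.array([1., 3., 5., 7., 9.])
c2 = np.array([2., 4., 6., 8., 10.])
A = np.column_stack([c1, c2, c1 + c2])

# Row rank equals column rank: matrix_rank agrees for A and A^T.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # 2 2
```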
1.7 Least Squares Problems
When a system \(A\mathbf{x} = \mathbf{b}\) has no exact solution (inconsistent), we often want the “best approximate solution.” This leads to the least squares problem.
1.7.1 The Least Squares Problem Setup
Problem: Given \(A \in \mathbb{R}^{m \times n}\) (typically \(m > n\), more equations than unknowns) and \(\mathbf{b} \in \mathbb{R}^m\), find \(\mathbf{x} \in \mathbb{R}^n\) that minimizes: \[\|A\mathbf{x} - \mathbf{b}\|^2\]
Why “least squares”? We’re minimizing the sum of squared errors: \[\|A\mathbf{x} - \mathbf{b}\|^2 = (A\mathbf{x} - \mathbf{b})^T(A\mathbf{x} - \mathbf{b}) = \sum_{i=1}^m (a_i^T\mathbf{x} - b_i)^2\]
where \(a_i^T\) is the \(i\)-th row of \(A\).
1.7.2 Geometric Interpretation
Since \(A\mathbf{x}\) always lies in the column space \(\mathcal{C}(A)\), we want to find the point in \(\mathcal{C}(A)\) that is closest to \(\mathbf{b}\). This closest point is the orthogonal projection of \(\mathbf{b}\) onto \(\mathcal{C}(A)\), denoted \(\mathbf{p} = \text{proj}_{\mathcal{C}(A)}(\mathbf{b})\).
The error vector \(\mathbf{e} = \mathbf{b} - \mathbf{p}\) is orthogonal to \(\mathcal{C}(A)\), so \(\mathbf{e} \in \mathcal{C}(A)^\perp = \mathcal{N}(A^T)\).
1.7.3 Why Least Squares?
Overdetermined systems (\(m > n\)) often have no exact solution because we have more constraints (equations) than degrees of freedom (unknowns). Common applications:
- Data fitting: Fitting a curve or line to experimental data points
- Regression analysis: Finding the best linear model for data
- Signal processing: Estimating a signal from noisy measurements
- Computer graphics: Approximating shapes with simpler models
- Machine learning: Training models where we have many training examples
Instead of giving up, we find the \(\mathbf{x}\) that makes \(A\mathbf{x}\) as close to \(\mathbf{b}\) as possible.
1.7.4 The Normal Equations (Preview)
The least squares solution \(\hat{\mathbf{x}}\) satisfies: \[A^TA\hat{\mathbf{x}} = A^T\mathbf{b}\]
These are called the normal equations. This equation says that the error \(\mathbf{b} - A\hat{\mathbf{x}}\) is orthogonal to the column space of \(A\).
Derivation: The error is \(\mathbf{e} = \mathbf{b} - A\hat{\mathbf{x}}\). For this to be the minimum error, \(\mathbf{e}\) must be perpendicular to \(\mathcal{C}(A)\), i.e., \(\mathbf{e} \in \mathcal{N}(A^T)\): \[A^T\mathbf{e} = \mathbf{0}\] \[A^T(\mathbf{b} - A\hat{\mathbf{x}}) = \mathbf{0}\] \[A^T\mathbf{b} - A^TA\hat{\mathbf{x}} = \mathbf{0}\] \[A^TA\hat{\mathbf{x}} = A^T\mathbf{b}\]
If \(A\) has linearly independent columns (full column rank), then \(A^TA\) is invertible, and: \[\hat{\mathbf{x}} = (A^TA)^{-1}A^T\mathbf{b}\]
When \(A\) has full column rank, the matrix \(A^+ = (A^TA)^{-1}A^T\) is called the pseudoinverse (or Moore-Penrose inverse) of \(A\); since \(A^+A = I\), it is a left inverse of \(A\).
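Both routes to \(\hat{\mathbf{x}}\) can be compared on a small made-up overdetermined system, assuming NumPy; in practice `np.linalg.lstsq` is preferred because forming \(A^TA\) explicitly squares the condition number:

```python
import numpy as np

# Fit y = c0 + c1*t to the points (0, 1), (1, 2), (2, 4): no exact solution.
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([1., 2., 4.])

# Route 1: solve the normal equations A^T A x = A^T b directly.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Route 2: library least-squares solver (stable QR/SVD internally).
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))   # both routes agree

# The residual is orthogonal to C(A): A^T (b - A x) = 0.
e = b - A @ x_normal
print(np.allclose(A.T @ e, 0))
```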
2. Definitions
- Column Space (\(\mathcal{C}(A)\) or \(\text{Col}(A)\)): The set of all linear combinations of the columns of \(A\); a subspace of \(\mathbb{R}^m\) with dimension \(r = \text{rank}(A)\).
- Null Space (\(\mathcal{N}(A)\) or \(\text{Nul}(A)\)): The set of all solutions to \(A\mathbf{x} = \mathbf{0}\); a subspace of \(\mathbb{R}^n\) with dimension \(n - r\).
- Row Space (\(\mathcal{C}(A^T)\) or \(\text{Row}(A)\)): The span of the rows of \(A\), equivalently the column space of \(A^T\); a subspace of \(\mathbb{R}^n\) with dimension \(r\).
- Left Null Space (\(\mathcal{N}(A^T)\) or \(\text{Nul}(A^T)\)): The null space of \(A^T\); a subspace of \(\mathbb{R}^m\) with dimension \(m - r\).
- Rank: The dimension of the column space (or equivalently, row space) of a matrix.
- Orthogonal Complement (\(V^\perp\)): For a subspace \(V \subseteq \mathbb{R}^n\), the set of all vectors in \(\mathbb{R}^n\) orthogonal to every vector in \(V\).
- Orthogonal Subspaces: Two subspaces \(V\) and \(W\) are orthogonal if every vector in \(V\) is orthogonal to every vector in \(W\).
- Direct Sum (\(V \oplus W\)): The space of all vectors of the form \(\mathbf{v} + \mathbf{w}\) where \(\mathbf{v} \in V\) and \(\mathbf{w} \in W\), when \(V \cap W = \{\mathbf{0}\}\).
- Least Squares Problem: Finding the vector \(\mathbf{x}\) that minimizes \(\|A\mathbf{x} - \mathbf{b}\|^2\).
- Normal Equations: The system \(A^TA\mathbf{x} = A^T\mathbf{b}\) whose solution gives the least squares solution.
- Pseudoinverse: For a matrix \(A\) with full column rank, \(A^+ = (A^TA)^{-1}A^T\) is the left inverse that computes the least squares solution.
- Pivot Columns: Columns of the original matrix \(A\) that contain pivots in its row echelon form; they form a basis for \(\mathcal{C}(A)\).
- Free Variables: Variables corresponding to non-pivot columns; they parametrize the null space.
- Leading Principal Minor: The determinant of the top-left \(k \times k\) submatrix of a matrix (used in the \(LDL^T\) decomposition).
3. Formulas
- Four Fundamental Subspace Dimensions: For an \(m \times n\) matrix \(A\) with rank \(r\):
- \(\text{dim}(\mathcal{C}(A)) = r\)
- \(\text{dim}(\mathcal{N}(A)) = n - r\)
- \(\text{dim}(\mathcal{C}(A^T)) = r\)
- \(\text{dim}(\mathcal{N}(A^T)) = m - r\)
- Rank-Nullity Theorem: \(\text{rank}(A) + \text{dim}(\mathcal{N}(A)) = n\) and \(\text{rank}(A) + \text{dim}(\mathcal{N}(A^T)) = m\)
- Orthogonal Complement Relationships:
- \(\mathcal{C}(A^T)^\perp = \mathcal{N}(A)\) and \(\mathcal{N}(A)^\perp = \mathcal{C}(A^T)\)
- \(\mathcal{C}(A)^\perp = \mathcal{N}(A^T)\) and \(\mathcal{N}(A^T)^\perp = \mathcal{C}(A)\)
- Dimension of Orthogonal Complement: \(\text{dim}(V) + \text{dim}(V^\perp) = n\) for any subspace \(V \subseteq \mathbb{R}^n\)
- Direct Sum Decompositions:
- \(\mathbb{R}^n = \mathcal{C}(A^T) \oplus \mathcal{N}(A)\)
- \(\mathbb{R}^m = \mathcal{C}(A) \oplus \mathcal{N}(A^T)\)
- Normal Equations: \(A^TA\hat{\mathbf{x}} = A^T\mathbf{b}\)
- Least Squares Solution (when \(A\) has full column rank): \(\hat{\mathbf{x}} = (A^TA)^{-1}A^T\mathbf{b}\)
- Projection onto Column Space: \(\mathbf{p} = A(A^TA)^{-1}A^T\mathbf{b}\)
- Projection Matrix: \(P = A(A^TA)^{-1}A^T\) projects onto \(\mathcal{C}(A)\)
- \(LDL^T\) Decomposition (for symmetric \(A\)): \(A = LDL^T\) where \(L\) is unit lower triangular and \(D\) is diagonal
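The projection matrix formula can be sanity-checked against the two defining properties of an orthogonal projector, \(P^2 = P\) and \(P^T = P\). A sketch with a made-up full-column-rank \(A\), assuming NumPy:

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])      # full column rank, so A^T A is invertible

P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P @ P, P))   # idempotent: projecting twice changes nothing
print(np.allclose(P, P.T))     # symmetric
print(np.allclose(P @ A, A))   # vectors already in C(A) are left fixed
```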
4. Examples
4.1. Four Fundamental Subspaces of a Matrix (Lab 4, Task 1)
Given the matrix
\[A = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 0 & 2 \\ 0 & 1 & 1 & 1 \end{pmatrix}\]
Compute an echelon form of \(A\) and its rank \(r\).
Find a basis for \(\text{Col}(A)\) and for \(\text{Row}(A)\).
Find a basis for \(\text{Nul}(A)\) and for \(\text{Nul}(A^T)\).
State the dimensions of the four fundamental subspaces \(\text{Col}(A)\), \(\text{Row}(A)\), \(\text{Nul}(A)\), and \(\text{Nul}(A^T)\), and verify that they satisfy \[\dim \text{Col}(A) = \dim \text{Row}(A) = r, \quad \dim \text{Nul}(A) = n - r, \quad \dim \text{Nul}(A^T) = m - r,\] where \(m\) is the number of rows of \(A\) and \(n\) is the number of columns.
Click to see the solution
Key Concept: Row reduction reveals the rank and provides bases for all four fundamental subspaces simultaneously.
(a) Echelon form and rank:
Row reduce \(A\):
\[\begin{pmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 0 & 2 \\ 0 & 1 & 1 & 1 \end{pmatrix} \xrightarrow{R_2 - 2R_1} \begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 \end{pmatrix} \xrightarrow{R_2 \leftrightarrow R_3} \begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}\]
The echelon form is \(U = \begin{pmatrix} 1 & 2 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}\), with pivots in columns 1 and 2. Thus \(r = 2\).
(b) Bases for \(\text{Col}(A)\) and \(\text{Row}(A)\):
- Basis for \(\text{Col}(A)\): The pivot columns of the original matrix \(A\) (columns 1 and 2): \[\text{Col}(A) = \text{span}\left\{\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \\ 1 \end{pmatrix}\right\}\]
- Basis for \(\text{Row}(A)\): The nonzero rows of the echelon form: \[\text{Row}(A) = \text{span}\left\{\begin{pmatrix} 1 \\ 2 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \\ 1 \end{pmatrix}\right\}\]
(c) Bases for \(\text{Nul}(A)\) and \(\text{Nul}(A^T)\):
- Basis for \(\text{Nul}(A)\): Solve \(A\mathbf{x} = \mathbf{0}\) using the echelon form. Free variables are \(x_3\) and \(x_4\).
From row 2: \(x_2 + x_3 + x_4 = 0 \Rightarrow x_2 = -x_3 - x_4\).
From row 1: \(x_1 + 2x_2 + x_4 = 0 \Rightarrow x_1 = -2x_2 - x_4 = 2x_3 + 2x_4 - x_4 = 2x_3 + x_4\).
Setting \((x_3, x_4) = (1, 0)\): \(\mathbf{n}_1 = \begin{pmatrix} 2 \\ -1 \\ 1 \\ 0 \end{pmatrix}\). Setting \((x_3, x_4) = (0, 1)\): \(\mathbf{n}_2 = \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \end{pmatrix}\).
\[\text{Nul}(A) = \text{span}\left\{\begin{pmatrix} 2 \\ -1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 0 \\ 1 \end{pmatrix}\right\}\]
- Basis for \(\text{Nul}(A^T)\): Solve \(A^T\mathbf{y} = \mathbf{0}\). Row reduce \(A^T\):
\[A^T = \begin{pmatrix} 1 & 2 & 0 \\ 2 & 4 & 1 \\ 0 & 0 & 1 \\ 1 & 2 & 1 \end{pmatrix} \xrightarrow{\text{row ops}} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\]
Free variable: \(y_2 = t\). From row 2: \(y_3 = 0\). From row 1: \(y_1 = -2y_2 = -2t\).
\[\text{Nul}(A^T) = \text{span}\left\{\begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}\right\}\]
(d) Dimension verification:
- \(\dim \text{Col}(A) = 2 = r\) ✓
- \(\dim \text{Row}(A) = 2 = r\) ✓
- \(\dim \text{Nul}(A) = 2 = n - r = 4 - 2\) ✓
- \(\dim \text{Nul}(A^T) = 1 = m - r = 3 - 2\) ✓
Answer: \(r = 2\). The bases are given above; all dimension formulas are satisfied.
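A sketch verifying these answers with NumPy:

```python
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [2., 4., 0., 2.],
              [0., 1., 1., 1.]])

print(np.linalg.matrix_rank(A))       # 2

# Nul(A) basis vectors (as rows) are annihilated by A.
N = np.array([[2., -1., 1., 0.],
              [1., -1., 0., 1.]])
print(np.allclose(A @ N.T, 0))

# Nul(A^T) basis vector is annihilated by A^T.
y = np.array([-2., 1., 0.])
print(np.allclose(A.T @ y, 0))
```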
4.2. Orthogonal Complement of a Plane (Lab 4, Task 2)
Find the orthogonal complement of the plane spanned by the vectors \((1, 1, 2)\) and \((1, 2, 3)\) by taking these to be the rows of \(A\) and solving \(Ax = 0\). Remember that the complement is a whole line.
Click to see the solution
Key Concept: The orthogonal complement of the row space of \(A\) equals the null space of \(A\) (by the Fundamental Theorem of Linear Algebra). The plane is \(\text{Row}(A)\), so we need \(\text{Nul}(A)\).
Set up the matrix: \[A = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 3 \end{pmatrix}\]
Row reduce \(A\) (the zero right-hand side is unchanged by row operations): \[\begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 3 \end{pmatrix} \xrightarrow{R_2 - R_1} \begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & 1 \end{pmatrix} \xrightarrow{R_1 - R_2} \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}\]
Read off the null space: Free variable \(x_3 = t\).
- \(x_2 + x_3 = 0 \Rightarrow x_2 = -t\)
- \(x_1 + x_3 = 0 \Rightarrow x_1 = -t\)
\[\mathbf{x} = t\begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}\]
Answer: The orthogonal complement of the plane is the line \(\text{span}\left\{\begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}\right\}\) (or equivalently \(\text{span}\left\{\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\right\}\)).
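In \(\mathbb{R}^3\), the normal to a plane can also be obtained as a cross product, which gives an independent check (assuming NumPy):

```python
import numpy as np

u = np.array([1., 1., 2.])
v = np.array([1., 2., 3.])

# The cross product is orthogonal to both spanning vectors of the plane.
n = np.cross(u, v)
print(n)                 # matches the null space direction (-1, -1, 1)

print(n @ u, n @ v)      # both zero: n is orthogonal to u and v
```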
4.3. Orthogonal Complement of Row Space and Decomposition (Lab 4, Task 3)
Find a basis for the orthogonal complement of the row space of \[A = \begin{pmatrix} 1 & 0 & 2 \\ 1 & 1 & 4 \end{pmatrix}\] Split \(x = (3, 3, 3)\) into a row space component \(x_r\) and a nullspace component \(x_n\).
Click to see the solution
Key Concept: The orthogonal complement of \(\text{Row}(A)\) is \(\text{Nul}(A)\). To split \(\mathbf{x}\), we express it as a combination of a vector in \(\text{Row}(A)\) and a vector in \(\text{Nul}(A)\).
- Find a basis for \(\text{Nul}(A) = \text{Row}(A)^\perp\):
\[\begin{pmatrix} 1 & 0 & 2 \\ 1 & 1 & 4 \end{pmatrix} \xrightarrow{R_2 - R_1} \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 2 \end{pmatrix}\]
Free variable \(x_3 = t\): \(x_1 = -2t\), \(x_2 = -2t\). Basis: \(\mathbf{n} = \begin{pmatrix} -2 \\ -2 \\ 1 \end{pmatrix}\).
- A basis for \(\text{Row}(A)\) is \(\mathbf{r}_1 = (1, 0, 2)^T\) and \(\mathbf{r}_2 = (0, 1, 2)^T\) (from the RREF rows).
- Decompose \(\mathbf{x} = (3,3,3)^T\): Write \(\mathbf{x} = \alpha\mathbf{r}_1 + \beta\mathbf{r}_2 + \gamma\mathbf{n}\):
\[\begin{pmatrix} 1 & 0 & -2 \\ 0 & 1 & -2 \\ 2 & 2 & 1 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\\\gamma\end{pmatrix} = \begin{pmatrix}3\\3\\3\end{pmatrix}\]
From rows 1 and 2: \(\alpha = 3 + 2\gamma\), \(\beta = 3 + 2\gamma\). Substituting into row 3: \(2(3+2\gamma) + 2(3+2\gamma) + \gamma = 3 \Rightarrow 12 + 9\gamma = 3 \Rightarrow \gamma = -1\). Thus \(\alpha = 1\), \(\beta = 1\).
Row space component: \[\mathbf{x}_r = \alpha\mathbf{r}_1 + \beta\mathbf{r}_2 = 1\cdot(1,0,2)^T + 1\cdot(0,1,2)^T = (1,1,4)^T\]
Null space component: \[\mathbf{x}_n = \gamma\mathbf{n} = -1\cdot(-2,-2,1)^T = (2,2,-1)^T\]
Verify: \(\mathbf{x}_r + \mathbf{x}_n = (1,1,4)^T + (2,2,-1)^T = (3,3,3)^T\) ✓, and \(A\mathbf{x}_n = \mathbf{0}\) ✓.
Answer: Basis for \(\text{Row}(A)^\perp\): \(\left\{\begin{pmatrix}-2\\-2\\1\end{pmatrix}\right\}\). Row space component: \(x_r = (1,1,4)^T\); null space component: \(x_n = (2,2,-1)^T\).
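The same split can be reproduced with the pseudoinverse, since \(A^+A\) is the orthogonal projector onto \(\text{Row}(A)\) and the orthogonal decomposition is unique; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [1., 1., 4.]])
x = np.array([3., 3., 3.])

x_r = np.linalg.pinv(A) @ A @ x   # projection onto Row(A); expect (1, 1, 4)
x_n = x - x_r                     # remainder in Nul(A); expect (2, 2, -1)

print(np.allclose(x_r, [1, 1, 4]))
print(np.allclose(x_n, [2, 2, -1]))
print(np.allclose(A @ x_n, 0))
```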
4.4. Why These Orthogonality Statements Are False (Lab 4, Task 4)
Why are these statements false?
If \(V\) is orthogonal to \(W\), then \(V^\perp\) is orthogonal to \(W^\perp\).
If \(V\) is orthogonal to \(W\) and \(W\) is orthogonal to \(Z\), then \(V\) is orthogonal to \(Z\).
Click to see the solution
Key Concept: Orthogonality is not transitive, and taking complements does not preserve orthogonality. Counterexamples in \(\mathbb{R}^3\) disprove both statements.
(a) Counterexample:
Let \(V = \text{span}\{(1,0,0)\}\) and \(W = \text{span}\{(0,1,0)\}\) in \(\mathbb{R}^3\).
Clearly \(V \perp W\) since \((1,0,0)\cdot(0,1,0) = 0\).
Now \(V^\perp = \text{span}\{(0,1,0),(0,0,1)\}\) and \(W^\perp = \text{span}\{(1,0,0),(0,0,1)\}\).
Both complements contain the vector \((0,0,1)\), so \((0,0,1) \in V^\perp\) and \((0,0,1) \in W^\perp\) and \((0,0,1)\cdot(0,0,1) = 1 \neq 0\).
Thus \(V^\perp\) is not orthogonal to \(W^\perp\). The statement is false.
(b) Counterexample:
Let \(V = \text{span}\{(1,0,0)\}\), \(W = \text{span}\{(0,1,0)\}\), \(Z = \text{span}\{(1,0,1)\}\) in \(\mathbb{R}^3\).
- \(V \perp W\): \((1,0,0)\cdot(0,1,0) = 0\) ✓
- \(W \perp Z\): \((0,1,0)\cdot(1,0,1) = 0\) ✓
- But \((1,0,0)\cdot(1,0,1) = 1 \neq 0\), so \(V\) is not orthogonal to \(Z\).
The statement is false because orthogonality is not transitive.
Answer: Both statements are false by the counterexamples above.
4.5. Vectors Orthogonal to the Fundamental Subspaces (Lab 4, Task 5)
Find a vector \(x\) orthogonal to the row space of \(A\), a vector \(y\) orthogonal to the column space, and a vector \(z\) orthogonal to the nullspace.
\[A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 3 & 6 & 4 \end{pmatrix}\]
Click to see the solution
Key Concept: A vector orthogonal to the row space is in the null space; a vector orthogonal to the column space is in the left null space; a vector orthogonal to the null space is in the row space (by the Fundamental Theorem).
- Row reduce \(A\): \[\begin{pmatrix}1&2&1\\2&4&3\\3&6&4\end{pmatrix} \xrightarrow{R_2-2R_1,\,R_3-3R_1} \begin{pmatrix}1&2&1\\0&0&1\\0&0&1\end{pmatrix} \xrightarrow{R_3-R_2} \begin{pmatrix}1&2&1\\0&0&1\\0&0&0\end{pmatrix} \xrightarrow{R_1-R_2} \begin{pmatrix}1&2&0\\0&0&1\\0&0&0\end{pmatrix}\]
Pivot columns: 1 and 3. \(r = 2\). Free variable: \(x_2\).
- \(\text{Nul}(A)\) (orthogonal to row space): Setting \(x_2 = 1\): \(x_1 = -2\), \(x_3 = 0\). \[\mathbf{x} = \begin{pmatrix}-2\\1\\0\end{pmatrix}\]
- \(\text{Nul}(A^T)\) (orthogonal to column space): Solve \(A^T\mathbf{y} = \mathbf{0}\). \[A^T = \begin{pmatrix}1&2&3\\2&4&6\\1&3&4\end{pmatrix} \xrightarrow{R_2-2R_1,\,R_3-R_1} \begin{pmatrix}1&2&3\\0&0&0\\0&1&1\end{pmatrix} \xrightarrow{R_2\leftrightarrow R_3} \begin{pmatrix}1&2&3\\0&1&1\\0&0&0\end{pmatrix}\]
Free variable \(y_3 = t\): \(y_2 = -t\), \(y_1 = -2(-t) - 3t = -t\). \[\mathbf{y} = t\begin{pmatrix}-1\\-1\\1\end{pmatrix}, \quad \text{e.g.} \quad \mathbf{y} = \begin{pmatrix}-1\\-1\\1\end{pmatrix}\]
- Row space (orthogonal to null space): Any nonzero row of the echelon form: \[\mathbf{z} = \begin{pmatrix}1\\2\\0\end{pmatrix} \quad \text{(first nonzero row of RREF)}\]
Answer: \(x = \begin{pmatrix}-2\\1\\0\end{pmatrix}\) (orthogonal to row space), \(y = \begin{pmatrix}-1\\-1\\1\end{pmatrix}\) (orthogonal to column space), \(z = \begin{pmatrix}1\\2\\0\end{pmatrix}\) (orthogonal to null space).
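A quick check, assuming NumPy, that each vector lands in the intended subspace:

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [2., 4., 3.],
              [3., 6., 4.]])

x = np.array([-2., 1., 0.])    # should be orthogonal to every row of A
y = np.array([-1., -1., 1.])   # should be orthogonal to every column of A
z = np.array([1., 2., 0.])     # should be orthogonal to the null space

print(np.allclose(A @ x, 0))   # x in Nul(A)
print(np.allclose(A.T @ y, 0)) # y in Nul(A^T)
print(z @ x)                   # zero: z is orthogonal to the null space basis
```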
4.6. Block Matrices: Echelon Form and Special Solutions (Lab 4, Task 6)
Find \(U\) (echelon form) for each of these block matrices, and find the special solutions:
\[A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 3 \\ 2 & 4 & 6 \end{bmatrix} \qquad B = \begin{bmatrix} A & A \end{bmatrix} \qquad C = \begin{bmatrix} A & A \\ A & 0 \end{bmatrix}\]
Click to see the solution
Key Concept: Gaussian elimination applies to block matrices exactly as to ordinary ones; the block structure determines where the pivots appear and what the special solutions look like.
Matrix \(A\) (\(3 \times 3\)):
\[A = \begin{bmatrix}0&0&0\\0&0&3\\2&4&6\end{bmatrix} \xrightarrow{R_1\leftrightarrow R_3} \begin{bmatrix}2&4&6\\0&0&3\\0&0&0\end{bmatrix}\]
Echelon form \(U_A = \begin{bmatrix}2&4&6\\0&0&3\\0&0&0\end{bmatrix}\). Pivots in columns 1 and 3; free variable: \(x_2\).
- From row 2: \(3x_3 = 0 \Rightarrow x_3 = 0\).
- From row 1: \(2x_1 + 4x_2 + 0 = 0 \Rightarrow x_1 = -2x_2\).
- Setting \(x_2 = 1\): special solution \(\mathbf{s}_A = \begin{pmatrix}-2\\1\\0\end{pmatrix}\).
Matrix \(B = [A \mid A]\) (\(3 \times 6\)):
Row reducing \(B\) is the same as row reducing \(A\), applied to both blocks:
\[U_B = \begin{bmatrix}2&4&6&2&4&6\\0&0&3&0&0&3\\0&0&0&0&0&0\end{bmatrix}\]
Pivots of \(U_B\) are in columns 1 and 3, so the free variables are \(x_2, x_4, x_5, x_6\). The RREF of \(B\) is \[\begin{bmatrix}1&2&0&1&2&0\\0&0&1&0&0&1\\0&0&0&0&0&0\end{bmatrix}\] giving the equations \(x_1 = -2x_2 - x_4 - 2x_5\) and \(x_3 = -x_6\). Setting each free variable to 1 in turn yields the 4 special solutions \[(-2,1,0,0,0,0)^T,\ (-1,0,0,1,0,0)^T,\ (-2,0,0,0,1,0)^T,\ (0,0,-1,0,0,1)^T.\]
Matrix \(C\) (\(6 \times 6\)):
\[C = \begin{bmatrix}A&A\\A&0\end{bmatrix}\]
Subtract top block from bottom block (\(R_{\text{bottom}} - R_{\text{top}}\)):
\[\begin{bmatrix}A&A\\0&-A\end{bmatrix} \sim \begin{bmatrix}U_A&U_A\\0&-U_A\end{bmatrix}\]
Echelon form of \(C\) has the same structure, with rank \(= 2 \cdot \text{rank}(A) = 4\). The null space has dimension \(6 - 4 = 2\).
Answer: \(U_A = \begin{bmatrix}2&4&6\\0&0&3\\0&0&0\end{bmatrix}\) with special solution \(\mathbf{s} = (-2,1,0)^T\) for \(A\). For \(B\) (rank 2), there are 4 special solutions. For \(C\) (rank 4), there are 2 special solutions.
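The rank counts above can be confirmed numerically; this sketch builds \(B\) and \(C\) with `np.block` and uses `np.linalg.matrix_rank` as a stand-in for counting pivots:

```python
import numpy as np

A = np.array([[0, 0, 0],
              [0, 0, 3],
              [2, 4, 6]])
B = np.block([A, A])                            # 3x6
C = np.block([[A, A], [A, np.zeros((3, 3))]])   # 6x6

print(np.linalg.matrix_rank(A))  # 2
print(np.linalg.matrix_rank(B))  # 2 -> 6 - 2 = 4 special solutions
print(np.linalg.matrix_rank(C))  # 4 -> 6 - 4 = 2 special solutions

s = np.array([-2, 1, 0])
print(A @ s)                     # zero vector: the special solution of A
```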
4.7. Construct a System from a Complete Solution (Lab 4, Task 7)
I. Find a \(2 \times 3\) system \(Ax = b\) whose complete solution is \[x = \begin{bmatrix}1\\2\\0\end{bmatrix} + w\begin{bmatrix}1\\3\\1\end{bmatrix}\]
II. Find a \(3 \times 3\) system with these solutions exactly when \(b_1 + b_2 = b_3\).
Click to see the solution
Key Concept: The complete solution decomposes as a particular solution plus the null space. We reverse-engineer the matrix and right-hand side.
Part I:
Identify the particular solution and null space direction:
- Particular solution: \(\mathbf{x}_p = (1,2,0)^T\)
- Null space direction: \(\mathbf{x}_h = (1,3,1)^T\)
Build \(A\) with \(\text{Nul}(A) = \text{span}\{(1,3,1)^T\}\): Use RREF with free variable \(x_3\): \[A = \begin{bmatrix}1&0&-1\\0&1&-3\end{bmatrix}\] Verify: \(A\mathbf{x}_h = (1-1, 3-3)^T = (0,0)^T\) ✓
Compute \(\mathbf{b} = A\mathbf{x}_p\): \[\mathbf{b} = \begin{bmatrix}1&0&-1\\0&1&-3\end{bmatrix}\begin{bmatrix}1\\2\\0\end{bmatrix} = \begin{bmatrix}1\\2\end{bmatrix}\]
Part II: Add a third row that is the sum of the first two (so consistency requires \(b_1 + b_2 = b_3\)): \[A = \begin{bmatrix}1&0&-1\\0&1&-3\\1&1&-4\end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix}b_1\\b_2\\b_1+b_2\end{bmatrix}\]
Answer:
- Part I: \(\begin{bmatrix}1&0&-1\\0&1&-3\end{bmatrix}x = \begin{bmatrix}1\\2\end{bmatrix}\)
- Part II: \(\begin{bmatrix}1&0&-1\\0&1&-3\\1&1&-4\end{bmatrix}x = \begin{bmatrix}b_1\\b_2\\b_1+b_2\end{bmatrix}\)
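The Part I construction is easy to verify: every point on the line \(x_p + w\,x_h\) must map to the same \(b\). A minimal check:

```python
import numpy as np

A = np.array([[1, 0, -1],
              [0, 1, -3]])
b = np.array([1, 2])
xp = np.array([1, 2, 0])   # particular solution
xh = np.array([1, 3, 1])   # null space direction

# Every x = xp + w*xh should solve Ax = b
for w in (-2.0, 0.0, 3.5):
    print(A @ (xp + w * xh))  # always [1. 2.]
```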
4.8. Complete Solutions of Linear Systems (Lab 4, Task 8)
Find the complete solutions of:
\[\begin{aligned}x + 3y + 3z &= 1 \\ 2x + 6y + 9z &= 5 \\ -x - 3y + 3z &= 5\end{aligned}\]
\[\begin{bmatrix}1&3&1&2\\2&6&4&8\\0&0&2&4\end{bmatrix}\begin{bmatrix}x\\y\\z\\t\end{bmatrix} = \begin{bmatrix}1\\3\\1\end{bmatrix}\]
Click to see the solution
Key Concept: The complete solution is the sum of a particular solution and the general homogeneous solution. Row reduce the augmented matrix to identify pivot/free variables.
(a):
Row reduce the augmented matrix: \[\left[\begin{array}{ccc|c}1&3&3&1\\2&6&9&5\\-1&-3&3&5\end{array}\right] \xrightarrow{R_2-2R_1,\,R_3+R_1} \left[\begin{array}{ccc|c}1&3&3&1\\0&0&3&3\\0&0&6&6\end{array}\right] \xrightarrow{R_3-2R_2} \left[\begin{array}{ccc|c}1&3&3&1\\0&0&3&3\\0&0&0&0\end{array}\right]\]
Back-substitute: From row 2: \(z = 1\). From row 1: \(x + 3y = 1 - 3 = -2\).
- Free variable \(y = \alpha\): \(x = -2 - 3\alpha\).
Complete solution: \[\begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}-2\\0\\1\end{pmatrix} + \alpha\begin{pmatrix}-3\\1\\0\end{pmatrix}\]
(b):
- Row reduce: \[\left[\begin{array}{cccc|c}1&3&1&2&1\\2&6&4&8&3\\0&0&2&4&1\end{array}\right] \xrightarrow{R_2-2R_1} \left[\begin{array}{cccc|c}1&3&1&2&1\\0&0&2&4&1\\0&0&2&4&1\end{array}\right] \xrightarrow{R_3-R_2} \left[\begin{array}{cccc|c}1&3&1&2&1\\0&0&2&4&1\\0&0&0&0&0\end{array}\right]\]
\[\xrightarrow{R_1 - \frac{1}{2}R_2} \left[\begin{array}{cccc|c}1&3&0&0&\frac{1}{2}\\0&0&1&2&\frac{1}{2}\\0&0&0&0&0\end{array}\right]\]
- Free variables \(y = \alpha\), \(t = \beta\); pivot variables \(x = \tfrac{1}{2} - 3\alpha\), \(z = \tfrac{1}{2} - 2\beta\).
- Complete solution: \[\begin{pmatrix}x\\y\\z\\t\end{pmatrix} = \begin{pmatrix}1/2\\0\\1/2\\0\end{pmatrix} + \alpha\begin{pmatrix}-3\\1\\0\\0\end{pmatrix} + \beta\begin{pmatrix}0\\0\\-2\\1\end{pmatrix}\]
Answer:
- (a): \(\begin{pmatrix}-2\\0\\1\end{pmatrix} + \alpha\begin{pmatrix}-3\\1\\0\end{pmatrix}\)
- (b): \(\begin{pmatrix}1/2\\0\\1/2\\0\end{pmatrix} + \alpha\begin{pmatrix}-3\\1\\0\\0\end{pmatrix} + \beta\begin{pmatrix}0\\0\\-2\\1\end{pmatrix}\)
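Both complete solutions can be spot-checked by plugging arbitrary values for the free parameters back into the original systems:

```python
import numpy as np

# System (a)
A1 = np.array([[1, 3, 3], [2, 6, 9], [-1, -3, 3]])
b1 = np.array([1, 5, 5])
xp1 = np.array([-2, 0, 1]); n1 = np.array([-3, 1, 0])

# System (b)
A2 = np.array([[1, 3, 1, 2], [2, 6, 4, 8], [0, 0, 2, 4]])
b2 = np.array([1, 3, 1])
xp2 = np.array([0.5, 0, 0.5, 0])
n2a = np.array([-3, 1, 0, 0]); n2b = np.array([0, 0, -2, 1])

print(A1 @ (xp1 + 2.0 * n1) - b1)               # zero vector
print(A2 @ (xp2 + 1.5 * n2a - 0.5 * n2b) - b2)  # zero vector
```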
4.9. Normal Vector and Orthogonal Complement of a Plane (Assignment 4, Task 1)
Let \(P = \{(x, y, z) \in \mathbb{R}^3 : 2x - 4y - 5z = 0\}\).
Find a nonzero vector \(n \in \mathbb{R}^3\) normal to \(P\) and verify that \(P = n^\perp\).
Find a basis for \(P\).
Find a basis for \(P^\perp\) and relate it to \(\text{span}\{n\}\).
Click to see the solution
Key Concept: A plane through the origin in \(\mathbb{R}^3\) defined by \(ax + by + cz = 0\) has normal vector \((a, b, c)\), and every point in the plane is orthogonal to this normal.
(a) Normal vector:
The plane is \(\{(x,y,z): 2x - 4y - 5z = 0\}\). The coefficients give the normal vector: \[n = \begin{pmatrix}2\\-4\\-5\end{pmatrix}\]
Verification \(P = n^\perp\): A vector \((x,y,z)\) is in \(n^\perp\) iff \(n \cdot (x,y,z) = 2x - 4y - 5z = 0\), which is exactly the defining equation of \(P\). ✓
(b) Basis for \(P\):
We need two linearly independent solutions of \(2x - 4y - 5z = 0\). Free variables: \(y, z\).
- Set \(y = 1, z = 0\): \(x = 2\). Vector: \((2, 1, 0)^T\).
- Set \(y = 0, z = 2\) (choosing \(z = 2\) to avoid fractions): \(2x = 10\), so \(x = 5\). Vector: \((5, 0, 2)^T\).
Basis for \(P\): \(\left\{\begin{pmatrix}2\\1\\0\end{pmatrix}, \begin{pmatrix}5\\0\\2\end{pmatrix}\right\}\).
(c) Basis for \(P^\perp\):
Since \(P\) has dimension 2 in \(\mathbb{R}^3\), \(P^\perp\) has dimension 1. We just found that \(n = (2,-4,-5)^T\) is orthogonal to \(P\), so: \[P^\perp = \text{span}\{n\} = \text{span}\left\{\begin{pmatrix}2\\-4\\-5\end{pmatrix}\right\}\]
Answer: \(n = (2,-4,-5)^T\); basis for \(P\): \(\{(2,1,0)^T, (5,0,2)^T\}\); basis for \(P^\perp = \text{span}\{n\}\).
4.10. Four Subspaces of a Parametric Matrix (Assignment 4, Task 2)
For \(a, b, c \in \mathbb{R}\), consider \[A = \begin{bmatrix}0&1&a&0&a&0\\0&0&1&b&0&b\\0&0&0&1&c&c\\0&0&0&0&0&0\end{bmatrix}\]
Find bases for \(\text{Col}(A)\), \(\text{Row}(A)\), \(\text{Nul}(A)\), and \(\text{Nul}(A^T)\) in terms of \(a\), \(b\), \(c\).
State explicitly the two orthogonality relations among the four subspaces.
Determine whether the dimensions of the four subspaces change if \(a = b = c = 0\).
Click to see the solution
Key Concept: The matrix \(A\) is already in row echelon form. We can read off pivots and free variables directly.
(a) Identifying subspaces:
The matrix \(A\) is \(4 \times 6\) and already in REF. Pivots are in columns 2, 3, and 4. Thus \(r = 3\).
\(\text{Col}(A)\): Pivot columns of the original matrix: columns 2, 3, 4. \[\text{Col}(A) = \text{span}\left\{\begin{pmatrix}1\\0\\0\\0\end{pmatrix}, \begin{pmatrix}a\\1\\0\\0\end{pmatrix}, \begin{pmatrix}0\\b\\1\\0\end{pmatrix}\right\}\]
\(\text{Row}(A)\): Nonzero rows of the REF (which is \(A\) itself): \[\text{Row}(A) = \text{span}\left\{(0,1,a,0,a,0),\ (0,0,1,b,0,b),\ (0,0,0,1,c,c)\right\}\]
\(\text{Nul}(A)\): Free variables are \(x_1, x_5, x_6\) (columns 1, 5, 6). Setting each to 1 in turn and back-substituting gives 3 special solutions: \[\mathbf{s}_1 = (1,0,0,0,0,0)^T,\quad \mathbf{s}_2 = (0,\,-a(bc+1),\,bc,\,-c,\,1,\,0)^T,\quad \mathbf{s}_3 = (0,\,ab(1-c),\,b(c-1),\,-c,\,0,\,1)^T\]
\(\text{Nul}(A^T)\): \(\dim \text{Nul}(A^T) = m - r = 4 - 3 = 1\). Since the last row of \(A\) is zero, \(e_4 = (0,0,0,1)^T\) satisfies \(A^T e_4 = 0\): \[\text{Nul}(A^T) = \text{span}\left\{\begin{pmatrix}0\\0\\0\\1\end{pmatrix}\right\}\]
(b) Orthogonality relations:
\[\text{Row}(A) \perp \text{Nul}(A) \quad \text{in } \mathbb{R}^6\] \[\text{Col}(A) \perp \text{Nul}(A^T) \quad \text{in } \mathbb{R}^4\]
(c) Effect of \(a = b = c = 0\):
If \(a = b = c = 0\), the matrix becomes block diagonal with the same pivot structure. The rank remains \(r = 3\), so all four subspace dimensions are unchanged: \(\dim\text{Col} = 3\), \(\dim\text{Row} = 3\), \(\dim\text{Nul} = 3\), \(\dim\text{Nul}(A^T) = 1\).
Answer: \(r = 3\) regardless of \(a, b, c\). Dimensions are \(3, 3, 3, 1\) for Col, Row, Nul, Nul(\(A^T\)) respectively. Setting \(a = b = c = 0\) does not change any dimensions.
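The claim that the rank is 3 for every choice of parameters can be tested numerically over a few sample values of \(a, b, c\) (a sketch, not a proof):

```python
import numpy as np

def A(a, b, c):
    return np.array([[0, 1, a, 0, a, 0],
                     [0, 0, 1, b, 0, b],
                     [0, 0, 0, 1, c, c],
                     [0, 0, 0, 0, 0, 0]], dtype=float)

for a, b, c in [(0, 0, 0), (1, 2, 3), (-5, 0.5, 7)]:
    print(np.linalg.matrix_rank(A(a, b, c)))  # 3 in every case
```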
4.11. Four Subspaces of \(A\), \(A+I\), and \(A+A^2\) (Assignment 4, Task 3)
Let \[A = \begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}\] Describe the four fundamental subspaces of each of the matrices \(A\), \(A+I\), and \(A+A^2\).
Click to see the solution
Key Concept: Compute the matrices explicitly, then determine their ranks and null spaces by row reduction.
Matrix \(A\):
Row reducing \(A\):
\[A = \begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}\]
This is in REF. Pivot columns: 2 and 3. Rank: \(r = 2\).
- \(\text{Col}(A) = \text{span}\{e_1, e_2\}\) where \(e_1 = (1,0,0)^T\), \(e_2 = (0,1,0)^T\): the pivot columns of \(A\) are columns 2 and 3, which equal \(e_1\) and \(e_2\). \(\dim\text{Col}(A) = 2\).
- \(\text{Row}(A) = \text{span}\{(0,1,0), (0,0,1)\}\). \(\dim\text{Row}(A) = 2\).
- \(\text{Nul}(A)\): \(x_1\) is free; special solution \((1,0,0)^T\). \(\dim\text{Nul}(A) = 1\).
- \(\text{Nul}(A^T)\): \(\dim = m - r = 3 - 2 = 1\). Solve \(A^T y = 0\): \(A^T = \begin{bmatrix}0&0&0\\1&0&0\\0&1&0\end{bmatrix}\). From row 3: \(y_2 = 0\); row 2: \(y_1 = 0\); \(y_3\) free. \(\text{Nul}(A^T) = \text{span}\{(0,0,1)^T\}\).
Matrix \(A + I\):
\[A + I = \begin{bmatrix}1&1&0\\0&1&1\\0&0&1\end{bmatrix}\]
This is upper triangular with 1s on the diagonal, so it is invertible (\(r = 3\)).
- \(\text{Col}(A+I) = \mathbb{R}^3\), \(\text{Row}(A+I) = \mathbb{R}^3\), \(\text{Nul}(A+I) = \{0\}\), \(\text{Nul}((A+I)^T) = \{0\}\).
Matrix \(A + A^2\):
\[A^2 = \begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}^2 = \begin{bmatrix}0&0&1\\0&0&0\\0&0&0\end{bmatrix}\]
\[A + A^2 = \begin{bmatrix}0&1&1\\0&0&1\\0&0&0\end{bmatrix}\]
Row reduce: pivots in columns 2 and 3. \(r = 2\).
- \(\text{Col}(A+A^2)\): the pivot columns are 2 and 3 of the original matrix, namely \((1,0,0)^T\) and \((1,1,0)^T\). So \(\text{Col}(A+A^2) = \text{span}\{(1,0,0)^T, (1,1,0)^T\}\), \(\dim = 2\).
- \(\text{Row}(A+A^2) = \text{span}\{(0,1,1),(0,0,1)\}\), \(\dim = 2\).
- \(\text{Nul}(A+A^2)\): free variable \(x_1\); special solution \((1,0,0)^T\). \(\dim = 1\).
- \(\text{Nul}((A+A^2)^T)\): \(\dim = 1\). \((A+A^2)^T = \begin{bmatrix}0&0&0\\1&0&0\\1&1&0\end{bmatrix}\). \(y_3\) free, \(y_1 = y_2 = 0\). \(\text{Nul} = \text{span}\{(0,0,1)^T\}\).
Answer: \(A\) and \(A+A^2\) both have rank 2, with null spaces \(\text{span}\{(1,0,0)^T\}\) and left null spaces \(\text{span}\{(0,0,1)^T\}\). \(A+I\) is invertible (rank 3), with trivial null spaces.
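The three ranks can be confirmed in a couple of lines:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
I = np.eye(3, dtype=int)

for M in (A, A + I, A + A @ A):
    print(np.linalg.matrix_rank(M))
# 2, 3, 2
```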
4.12. Is \(w\) in \(\text{Col}(A)\) or \(\text{Nul}(A)\)? (First Matrix) (Assignment 4, Task 4)
Let \[w = \begin{bmatrix}1\\1\\-1\\-3\end{bmatrix}, \quad A = \begin{bmatrix}7&6&-4&1\\-5&-1&0&-2\\9&-11&7&-3\\19&-9&7&1\end{bmatrix}\] Determine whether \(w \in \text{Col}(A)\), whether \(w \in \text{Nul}(A)\), both, or neither.
Click to see the solution
Key Concept: Check \(w \in \text{Nul}(A)\) by computing \(Aw\); check \(w \in \text{Col}(A)\) by solving \(Ax = w\).
Check \(w \in \text{Nul}(A)\): Compute \(Aw\):
\[Aw = \begin{bmatrix}7(1)+6(1)-4(-1)+1(-3)\\-5(1)-1(1)+0(-1)-2(-3)\\9(1)-11(1)+7(-1)-3(-3)\\19(1)-9(1)+7(-1)+1(-3)\end{bmatrix} = \begin{bmatrix}7+6+4-3\\-5-1+0+6\\9-11-7+9\\19-9-7-3\end{bmatrix} = \begin{bmatrix}14\\0\\0\\0\end{bmatrix}\]
Since \(Aw \neq \mathbf{0}\), \(w \notin \text{Nul}(A)\).
Check \(w \in \text{Col}(A)\): Solve \(Ax = w\) by row reducing \([A \mid w]\). If the system is consistent, \(w \in \text{Col}(A)\).
Row reduce: \[\left[\begin{array}{cccc|c}7&6&-4&1&1\\-5&-1&0&-2&1\\9&-11&7&-3&-1\\19&-9&7&1&-3\end{array}\right]\]
Gaussian elimination shows \(\text{rank}(A) = 3\): the rows satisfy \(R_4 = R_3 - 2R_2\), so \(\text{Nul}(A^T) = \text{span}\{(0,2,-1,1)^T\}\). Consistency of \(Ax = w\) requires \(w\) to be orthogonal to this left null vector: \((0,2,-1,1)\cdot(1,1,-1,-3) = 2 + 1 - 3 = 0\) ✓. No pivot appears in the augmented column, so the system is consistent and \(w \in \text{Col}(A)\).
Answer: \(w \in \text{Col}(A)\) but \(w \notin \text{Nul}(A)\).
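Both membership tests reduce to a matrix-vector product and a rank comparison: \(w \in \text{Col}(A)\) exactly when appending \(w\) as an extra column does not raise the rank. A sketch:

```python
import numpy as np

A = np.array([[7, 6, -4, 1],
              [-5, -1, 0, -2],
              [9, -11, 7, -3],
              [19, -9, 7, 1]])
w = np.array([1, 1, -1, -3])

print(A @ w)  # [14, 0, 0, 0] -> nonzero, so w ∉ Nul(A)

r = np.linalg.matrix_rank(A)
r_aug = np.linalg.matrix_rank(np.column_stack([A, w]))
print(r, r_aug)  # equal ranks -> Ax = w is consistent -> w ∈ Col(A)
```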
4.13. Is \(w\) in \(\text{Col}(A)\) or \(\text{Nul}(A)\)? (Second Matrix) (Assignment 4, Task 5)
Let \[w = \begin{bmatrix}1\\2\\0\\1\end{bmatrix}, \quad A = \begin{bmatrix}-8&5&-2&0\\-5&2&1&-2\\10&-8&6&-3\\3&-2&1&0\end{bmatrix}\] Determine whether \(w \in \text{Col}(A)\), whether \(w \in \text{Nul}(A)\), both, or neither.
Click to see the solution
Check \(w \in \text{Nul}(A)\): Compute \(Aw\):
\[Aw = \begin{bmatrix}-8(1)+5(2)-2(0)+0(1)\\-5(1)+2(2)+1(0)-2(1)\\10(1)-8(2)+6(0)-3(1)\\3(1)-2(2)+1(0)+0(1)\end{bmatrix} = \begin{bmatrix}-8+10\\-5+4-2\\10-16-3\\3-4\end{bmatrix} = \begin{bmatrix}2\\-3\\-9\\-1\end{bmatrix}\]
Since \(Aw \neq \mathbf{0}\), \(w \notin \text{Nul}(A)\).
Check \(w \in \text{Col}(A)\): Row reduce \([A \mid w]\) to check consistency.
\[\left[\begin{array}{cccc|c}-8&5&-2&0&1\\-5&2&1&-2&2\\10&-8&6&-3&0\\3&-2&1&0&1\end{array}\right]\]
Row reduction shows \(\text{rank}(A) = 3\), with the row relation \(4R_1 - 3R_2 + 2R_3 - R_4 = 0\). Consistency therefore requires \(4b_1 - 3b_2 + 2b_3 - b_4 = 0\); for \(w\) this gives \(4(1) - 3(2) + 2(0) - 1 = -3 \neq 0\). A row \([0\;0\;0\;0 \mid c]\) with \(c \neq 0\) appears, so the system is inconsistent and \(w \notin \text{Col}(A)\).
Answer: \(w \notin \text{Col}(A)\) and \(w \notin \text{Nul}(A)\) — neither.
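The same rank-comparison check applies here; this time appending \(w\) raises the rank, confirming inconsistency:

```python
import numpy as np

A = np.array([[-8, 5, -2, 0],
              [-5, 2, 1, -2],
              [10, -8, 6, -3],
              [3, -2, 1, 0]])
w = np.array([1, 2, 0, 1])

print(A @ w)  # [2, -3, -9, -1] -> nonzero, so w ∉ Nul(A)

r = np.linalg.matrix_rank(A)
r_aug = np.linalg.matrik_rank if False else np.linalg.matrix_rank(np.column_stack([A, w]))
print(r, r_aug)  # rank jumps from 3 to 4 -> inconsistent -> w ∉ Col(A)
```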
4.14. Subspaces in \(\mathbb{R}^n\) vs \(\mathbb{R}^m\) (Assignment 4, Task 6)
Let \(A\) be an \(m \times n\) matrix. Which of the following subspaces are in \(\mathbb{R}^n\) and which are in \(\mathbb{R}^m\)? \[\text{Row}(A),\ \text{Col}(A),\ \text{Nul}(A),\ \text{Row}(A^T),\ \text{Col}(A^T),\ \text{Nul}(A^T)\] How many distinct subspaces can appear in this list (in general)?
Click to see the solution
Key Concept: The ambient space of each subspace is determined by where the vectors live — input space \(\mathbb{R}^n\) or output space \(\mathbb{R}^m\).
- In \(\mathbb{R}^n\): \(\text{Row}(A)\), \(\text{Nul}(A)\), \(\text{Col}(A^T)\) — all subsets of the input space.
- In \(\mathbb{R}^m\): \(\text{Col}(A)\), \(\text{Nul}(A^T)\), \(\text{Row}(A^T)\) — all subsets of the output space.
Note: \(\text{Row}(A) = \text{Col}(A^T)\) and \(\text{Col}(A) = \text{Row}(A^T)\), so the six entries reduce to four distinct subspaces: \(\text{Row}(A)\), \(\text{Col}(A)\), \(\text{Nul}(A)\), \(\text{Nul}(A^T)\).
In general, there are 4 distinct subspaces (the four fundamental subspaces).
Answer: \(\text{Row}(A)\), \(\text{Nul}(A)\), \(\text{Col}(A^T)\) live in \(\mathbb{R}^n\); \(\text{Col}(A)\), \(\text{Nul}(A^T)\), \(\text{Row}(A^T)\) live in \(\mathbb{R}^m\). In general, 4 distinct subspaces.
4.15. Justifying the Rank-Nullity Identities (Assignment 4, Task 7)
Let \(A\) be an \(m \times n\) matrix. Justify the identities: \[\dim\text{Row}(A) + \dim\text{Nul}(A) = n, \qquad \dim\text{Col}(A) + \dim\text{Nul}(A^T) = m\]
Click to see the solution
Key Concept: These are the two statements of the Rank-Nullity Theorem applied to \(A\) and \(A^T\).
First identity: \(\dim\text{Row}(A) + \dim\text{Nul}(A) = n\).
The Rank-Nullity Theorem states that for any linear map \(T: \mathbb{R}^n \to \mathbb{R}^m\), \[\dim(\ker T) + \dim(\text{im}\, T) = n\]
For \(A\) as a linear map \(\mathbb{R}^n \to \mathbb{R}^m\): \(\ker = \text{Nul}(A)\), \(\text{im} = \text{Col}(A)\). But \(\dim\text{Col}(A) = r = \dim\text{Row}(A)\) (row rank equals column rank). So: \[r + \dim\text{Nul}(A) = n \implies \dim\text{Row}(A) + \dim\text{Nul}(A) = n\]
Second identity: \(\dim\text{Col}(A) + \dim\text{Nul}(A^T) = m\).
Apply the Rank-Nullity Theorem to \(A^T: \mathbb{R}^m \to \mathbb{R}^n\): \[\dim\text{Nul}(A^T) + \dim\text{Col}(A^T) = m\]
Since \(\dim\text{Col}(A^T) = \dim\text{Row}(A) = r = \dim\text{Col}(A)\): \[\dim\text{Col}(A) + \dim\text{Nul}(A^T) = m \quad \square\]
Answer: Both identities follow directly from the Rank-Nullity Theorem applied to \(A\) and \(A^T\), together with the fact that row rank equals column rank.
4.16. Equivalence: Full Column Space, Nul\((A^T) = \{0\}\), Consistency (Assignment 4, Task 8)
Let \(A\) be an \(m \times n\) matrix. Show that the following are equivalent:
\(Ax = b\) is consistent for every \(b \in \mathbb{R}^m\).
\(\text{Col}(A) = \mathbb{R}^m\).
\(\text{Nul}(A^T) = \{0\}\).
Click to see the solution
Key Concept: These three conditions all say the same thing: every output vector is achievable, i.e., \(A\) is surjective. We prove the equivalences in a cycle.
(a) \(\Rightarrow\) (b): If \(Ax = b\) is consistent for every \(b\), then every \(b \in \mathbb{R}^m\) is in \(\text{Col}(A)\). Thus \(\text{Col}(A) = \mathbb{R}^m\).
(b) \(\Rightarrow\) (a): If \(\text{Col}(A) = \mathbb{R}^m\), then every \(b \in \mathbb{R}^m\) is a linear combination of the columns of \(A\), i.e., \(Ax = b\) has a solution \(x\).
(b) \(\Rightarrow\) (c): If \(\text{Col}(A) = \mathbb{R}^m\), then \(\dim\text{Col}(A) = m\), so \(r = m\). By the dimension formula: \(\dim\text{Nul}(A^T) = m - r = 0\), so \(\text{Nul}(A^T) = \{0\}\).
(c) \(\Rightarrow\) (b): If \(\text{Nul}(A^T) = \{0\}\), then \(\dim\text{Nul}(A^T) = 0\), so \(r = m\), meaning \(\dim\text{Col}(A) = m\). Since \(\text{Col}(A) \subseteq \mathbb{R}^m\) and has dimension \(m\), we get \(\text{Col}(A) = \mathbb{R}^m\). \(\square\)
Answer: All three are equivalent; each follows from the others via the column space, rank, and Rank-Nullity Theorem.
4.17. Normal Vectors and Orthogonal Complement Construction (Assignment 4, Task 9)
Consider the planes in \(\mathbb{R}^3\): \[P_1 = \{x : 3x_1 - 4x_2 + x_3 = 0\}, \quad P_2 = \{x : 5x_1 - 10x_3 = 0\}\]
Find normal vectors \(n_1, n_2\) to \(P_1, P_2\).
Find bases for \(P_1\) and \(P_2\).
Construct a \(2 \times 3\) matrix \(A_1\) whose row space is \(P_1\). Show that \(\text{Nul}(A_1) = \text{span}\{n_1\}\).
Construct a \(3 \times 2\) matrix \(A_2\) whose column space is \(P_2\). Show that \(\text{Nul}(A_2^T) = \text{span}\{n_2\}\).
Click to see the solution
(a) Normal vectors:
- \(n_1 = (3, -4, 1)^T\) (coefficients of \(3x_1 - 4x_2 + x_3 = 0\))
- \(n_2 = (5, 0, -10)^T\) or simplified \((1, 0, -2)^T\) (coefficients of \(5x_1 - 10x_3 = 0\))
(b) Bases:
- Basis for \(P_1\): Free variables \(x_2, x_3\). Set \((x_2,x_3)=(1,0)\): \(x_1 = 4/3\), so \((4,3,0)^T\) (multiply by 3). Set \((x_2,x_3)=(0,1)\): \(x_1 = -1/3\), so \((-1,0,3)^T\) (multiply by 3). Basis: \(\{(4,3,0)^T, (-1,0,3)^T\}\).
- Basis for \(P_2\): \(5x_1 = 10x_3 \Rightarrow x_1 = 2x_3\). Free: \(x_2, x_3\). Set \((x_2,x_3)=(1,0)\): \((0,1,0)^T\). Set \((x_2,x_3)=(0,1)\): \((2,0,1)^T\). Basis: \(\{(0,1,0)^T, (2,0,1)^T\}\).
(c) Matrix \(A_1\) with row space \(P_1\):
Take a basis for \(P_1\) as rows: \[A_1 = \begin{bmatrix}4&3&0\\-1&0&3\end{bmatrix}\]
\(\text{Nul}(A_1)\): solve \(A_1 x = 0\). Row reduce: \(\begin{bmatrix}4&3&0\\-1&0&3\end{bmatrix}\). One free variable gives a 1-dimensional null space. Setting \(x_3 = 4\): \(-x_1 + 12 = 0 \Rightarrow x_1 = 12\), \(4(12) + 3x_2 = 0 \Rightarrow x_2 = -16\). So \(\mathbf{n} = (12,-16,4)^T = 4(3,-4,1)^T \parallel n_1\). Thus \(\text{Nul}(A_1) = \text{span}\{n_1\}\). ✓
(d) Matrix \(A_2\) with column space \(P_2\):
Take a basis for \(P_2\) as columns: \[A_2 = \begin{bmatrix}0&2\\1&0\\0&1\end{bmatrix}\]
\(\text{Nul}(A_2^T)\) is the left null space of \(A_2\), with \(\dim = 3 - 2 = 1\). Find \(y\) with \(A_2^T y = 0\):
\[A_2^T = \begin{bmatrix}0&1&0\\2&0&1\end{bmatrix}\]
From row 1: \(y_2 = 0\). From row 2: \(2y_1 + y_3 = 0 \Rightarrow y_3 = -2y_1\). Set \(y_1 = 1\): \((1,0,-2)^T \parallel n_2\). ✓
Answer: \(n_1 = (3,-4,1)^T\), \(n_2 = (1,0,-2)^T\). The constructions confirm the null space relationships.
4.18. Four Subspaces from Row-Equivalent Matrices (Assignment 4, Task 10)
The matrices below are row equivalent: \[A = \begin{bmatrix}2&-1&1&-6&8\\1&-2&-4&3&-2\\-7&8&10&3&-10\\4&-5&-7&0&4\end{bmatrix}, \quad B = \begin{bmatrix}1&-2&-4&3&-2\\0&3&9&-12&12\\0&0&0&0&0\\0&0&0&0&0\end{bmatrix}\]
Find \(\text{rank}(A)\) and \(\dim\text{Nul}(A)\) without further row reduction.
Find a basis for \(\text{Row}(A)\) and a basis for \(\text{Col}(A)\).
Find a basis for \(\text{Nul}(A)\).
Find a basis for \(\text{Nul}(A^T)\).
Click to see the solution
Key Concept: Row equivalent matrices have the same row space and null space. The echelon form \(B\) gives us everything we need.
(a) Rank and nullity:
\(B\) has 2 nonzero rows, so \(r = \text{rank}(A) = 2\). The matrix is \(4 \times 5\), so \(\dim\text{Nul}(A) = n - r = 5 - 2 = 3\).
(b) Bases for \(\text{Row}(A)\) and \(\text{Col}(A)\):
\(\text{Row}(A)\): The nonzero rows of \(B\): \[\text{Row}(A) = \text{span}\{(1,-2,-4,3,-2),\ (0,3,9,-12,12)\}\]
\(\text{Col}(A)\): Pivots of \(B\) are in columns 1 and 2. Use the corresponding columns of the original \(A\): \[\text{Col}(A) = \text{span}\left\{\begin{pmatrix}2\\1\\-7\\4\end{pmatrix}, \begin{pmatrix}-1\\-2\\8\\-5\end{pmatrix}\right\}\]
(c) Basis for \(\text{Nul}(A)\):
RREF from \(B\): \(\frac{1}{3}R_2\) and eliminate: \[\begin{bmatrix}1&0&2&-5&6\\0&1&3&-4&4\\0&0&0&0&0\\0&0&0&0&0\end{bmatrix}\]
Free variables: \(x_3, x_4, x_5\). Special solutions:
- \(x_3 = 1\): \(x_2 = -3\), \(x_1 = -2\). \(\mathbf{s}_1 = (-2,-3,1,0,0)^T\).
- \(x_4 = 1\): \(x_2 = 4\), \(x_1 = 5\). \(\mathbf{s}_2 = (5,4,0,1,0)^T\).
- \(x_5 = 1\): \(x_2 = -4\), \(x_1 = -6\). \(\mathbf{s}_3 = (-6,-4,0,0,1)^T\).
(d) Basis for \(\text{Nul}(A^T)\):
\(\dim\text{Nul}(A^T) = m - r = 4 - 2 = 2\). The zero rows of \(B\) record relations among the rows of \(A\). Solving for the dependencies gives \(R_3 = -2R_1 - 3R_2\) and \(R_4 = R_1 + 2R_2\), i.e. \(2R_1 + 3R_2 + R_3 = 0\) and \(R_1 + 2R_2 - R_4 = 0\). Hence a basis for \(\text{Nul}(A^T)\): \[\left\{\begin{pmatrix}2\\3\\1\\0\end{pmatrix},\ \begin{pmatrix}1\\2\\0\\-1\end{pmatrix}\right\}\]
Answer: \(\text{rank}(A) = 2\), \(\dim\text{Nul}(A) = 3\). Row basis, column basis, and null basis are as computed above.
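A numerical cross-check of parts (a), (c), and (d). The two left-null candidates below come from the row relations \(2R_1 + 3R_2 + R_3 = 0\) and \(R_1 + 2R_2 - R_4 = 0\) (derived by hand; treat them as candidates to be verified):

```python
import numpy as np

A = np.array([[2, -1, 1, -6, 8],
              [1, -2, -4, 3, -2],
              [-7, 8, 10, 3, -10],
              [4, -5, -7, 0, 4]])

# special solutions of Ax = 0 read off from the RREF, as columns
S = np.array([[-2, -3, 1, 0, 0],
              [5, 4, 0, 1, 0],
              [-6, -4, 0, 0, 1]]).T

# candidate basis of Nul(A^T), as rows
Y = np.array([[2, 3, 1, 0],
              [1, 2, 0, -1]])

print(np.linalg.matrix_rank(A))  # 2
print(A @ S)                     # 4x3 zero matrix: S spans Nul(A)
print(Y @ A)                     # 2x5 zero matrix: Y spans Nul(A^T)
```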
4.19. Four Subspaces and Orthogonality Verification (Assignment 4, Task 11)
Let \[A = \begin{bmatrix}5&-2&3\\-1&0&-1\\1&-2&-2\\-5&7&2\end{bmatrix}\] Find bases for \(\text{Col}(A)\), \(\text{Row}(A)\), \(\text{Nul}(A)\), and \(\text{Nul}(A^T)\), and verify the two orthogonality relations.
Click to see the solution
- Row reduce \(A\):
\[\begin{bmatrix}5&-2&3\\-1&0&-1\\1&-2&-2\\-5&7&2\end{bmatrix}\]
Swap \(R_1 \leftrightarrow R_3\): \[\begin{bmatrix}1&-2&-2\\-1&0&-1\\5&-2&3\\-5&7&2\end{bmatrix} \xrightarrow{R_2+R_1,\,R_3-5R_1,\,R_4+5R_1} \begin{bmatrix}1&-2&-2\\0&-2&-3\\0&8&13\\0&-3&-8\end{bmatrix}\]
\[\xrightarrow{R_3+4R_2,\,R_4-\frac{3}{2}R_2} \begin{bmatrix}1&-2&-2\\0&-2&-3\\0&0&1\\0&0&-\frac{7}{2}\end{bmatrix} \xrightarrow{R_4+\frac{7}{2}R_3} \begin{bmatrix}1&-2&-2\\0&-2&-3\\0&0&1\\0&0&0\end{bmatrix}\]
All 3 columns are pivot columns. \(r = 3 = n\), so \(\text{Nul}(A) = \{0\}\).
- Bases:
- \(\text{Col}(A)\): The 3 pivot columns of \(A\) form a basis: \[\left\{\begin{pmatrix}5\\-1\\1\\-5\end{pmatrix},\begin{pmatrix}-2\\0\\-2\\7\end{pmatrix},\begin{pmatrix}3\\-1\\-2\\2\end{pmatrix}\right\}\]
- \(\text{Row}(A)\): Nonzero rows of REF: \(\{(1,-2,-2),(0,-2,-3),(0,0,1)\}\) (as row vectors in \(\mathbb{R}^3\)).
- \(\text{Nul}(A) = \{0\}\) (no free variables).
- \(\text{Nul}(A^T)\): \(\dim = 4 - 3 = 1\). The rows of \(A\) satisfy \(7R_1 + 25R_2 + 2R_4 = 0\) (with \(R_3\) not involved), so \[\text{Nul}(A^T) = \text{span}\{(7, 25, 0, 2)^T\}\]
- Orthogonality: \(\text{Row}(A) \perp \text{Nul}(A)\) holds trivially since \(\text{Nul}(A) = \{0\}\). \(\text{Col}(A) \perp \text{Nul}(A^T)\): check \((7,25,0,2)\) against each column of \(A\): \(35 - 25 + 0 - 10 = 0\), \(-14 + 0 + 0 + 14 = 0\), \(21 - 25 + 0 + 4 = 0\) ✓.
Answer: \(r = 3\), \(\text{Nul}(A) = \{0\}\), \(\text{Nul}(A^T)\) is 1-dimensional. The two orthogonality relations hold by the Fundamental Theorem.
4.20. Four Subspaces of a \(2 \times 5\) Matrix (Assignment 4, Task 12)
Let \[A = \begin{bmatrix}4&5&-2&6&0\\1&1&0&1&0\end{bmatrix}\] Find bases for \(\text{Col}(A)\), \(\text{Row}(A)\), \(\text{Nul}(A)\), and \(\text{Nul}(A^T)\), and verify the two orthogonality relations.
Click to see the solution
- Row reduce: \[\begin{bmatrix}4&5&-2&6&0\\1&1&0&1&0\end{bmatrix} \xrightarrow{R_1\leftrightarrow R_2} \begin{bmatrix}1&1&0&1&0\\4&5&-2&6&0\end{bmatrix} \xrightarrow{R_2-4R_1} \begin{bmatrix}1&1&0&1&0\\0&1&-2&2&0\end{bmatrix} \xrightarrow{R_1-R_2} \begin{bmatrix}1&0&2&-1&0\\0&1&-2&2&0\end{bmatrix}\]
Pivots in columns 1 and 2. \(r = 2\). Free variables: \(x_3, x_4, x_5\).
- Bases:
- \(\text{Col}(A)\): Columns 1 and 2 of original \(A\): \(\left\{\begin{pmatrix}4\\1\end{pmatrix}, \begin{pmatrix}5\\1\end{pmatrix}\right\}\).
- \(\text{Row}(A)\): \(\{(1,0,2,-1,0),\ (0,1,-2,2,0)\}\).
- \(\text{Nul}(A)\): Set \(x_3 = 1\): \(x_1 = -2, x_2 = 2\). \(\mathbf{s}_1 = (-2,2,1,0,0)^T\). Set \(x_4 = 1\): \(x_1 = 1, x_2 = -2\). \(\mathbf{s}_2 = (1,-2,0,1,0)^T\). Set \(x_5 = 1\): \(x_1 = 0, x_2 = 0\). \(\mathbf{s}_3 = (0,0,0,0,1)^T\).
- \(\text{Nul}(A^T)\): \(\dim = 2 - 2 = 0\). \(\text{Nul}(A^T) = \{0\}\) (since \(r = m = 2\), \(A\) has full row rank).
- Orthogonality:
- \(\text{Row}(A) \perp \text{Nul}(A)\): Verify e.g. \((1,0,2,-1,0)\cdot(-2,2,1,0,0) = -2+0+2+0+0=0\) ✓.
- \(\text{Col}(A) \perp \text{Nul}(A^T)\): trivially satisfied since \(\text{Nul}(A^T) = \{0\}\).
Answer: \(r = 2\). Bases as given. Both orthogonality relations verified.
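The rank, special solutions, and full-row-rank claim admit a short numerical check:

```python
import numpy as np

A = np.array([[4, 5, -2, 6, 0],
              [1, 1, 0, 1, 0]])

# special solutions as columns
S = np.array([[-2, 2, 1, 0, 0],
              [1, -2, 0, 1, 0],
              [0, 0, 0, 0, 1]]).T

print(np.linalg.matrix_rank(A))  # 2 = m, so Nul(A^T) = {0}
print(A @ S)                     # 2x3 zero matrix: S spans Nul(A)
```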
4.21. Dimensions of Four Subspaces Depending on Parameters (Assignment 4, Task 13)
Find the dimensions of the four fundamental subspaces of \(A\), depending on the parameters \(\alpha\) and \(\beta\): \[A = \begin{bmatrix}1&-1&-2\\1&\alpha&-1\\-1&1&\beta\end{bmatrix}\]
Click to see the solution
Key Concept: The rank depends on the parameters. Row reduce and determine when rows become linearly dependent.
Row reduce: \[\begin{bmatrix}1&-1&-2\\1&\alpha&-1\\-1&1&\beta\end{bmatrix} \xrightarrow{R_2-R_1,\,R_3+R_1} \begin{bmatrix}1&-1&-2\\0&\alpha+1&1\\0&0&\beta-2\end{bmatrix}\]
Case analysis:
- Generic case (\(\alpha \neq -1\) and \(\beta \neq 2\)): All 3 pivots exist. \(r = 3\). Dimensions: \(\dim\text{Col} = 3\), \(\dim\text{Row} = 3\), \(\dim\text{Nul} = 0\), \(\dim\text{Nul}(A^T) = 0\).
- Case \(\alpha = -1\), \(\beta \neq 2\): Row 2 becomes \((0,0,1)\). After swapping with row 3: pivots in columns 1 and 3. \(r = 2\). Dimensions: \(\dim\text{Col} = 2\), \(\dim\text{Row} = 2\), \(\dim\text{Nul} = 1\), \(\dim\text{Nul}(A^T) = 1\).
- Case \(\alpha \neq -1\), \(\beta = 2\): Row 3 becomes zero. \(r = 2\). Dimensions: \(\dim\text{Col} = 2\), \(\dim\text{Row} = 2\), \(\dim\text{Nul} = 1\), \(\dim\text{Nul}(A^T) = 1\).
- Case \(\alpha = -1\), \(\beta = 2\): Rows 2 and 3 become \((0,0,1)\) and \((0,0,0)\), so \(r = 2\) with the same dimensions as the previous two cases.
Answer:
| Condition | \(r\) | \(\dim\text{Nul}\) | \(\dim\text{Nul}(A^T)\) |
|---|---|---|---|
| \(\alpha \neq -1\) and \(\beta \neq 2\) | 3 | 0 | 0 |
| \(\alpha = -1\) or \(\beta = 2\) (or both) | 2 | 1 | 1 |
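The case table can be sampled numerically; one representative \((\alpha, \beta)\) pair per case suffices as a sanity check:

```python
import numpy as np

def A(alpha, beta):
    return np.array([[1, -1, -2],
                     [1, alpha, -1],
                     [-1, 1, beta]], dtype=float)

print(np.linalg.matrix_rank(A(0, 0)))   # generic case        -> 3
print(np.linalg.matrix_rank(A(-1, 0)))  # alpha = -1          -> 2
print(np.linalg.matrix_rank(A(0, 2)))   # beta = 2            -> 2
print(np.linalg.matrix_rank(A(-1, 2)))  # both degeneracies   -> 2
```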
4.22. Row Reduce to Echelon Form and Identify Free Variables (Tutorial 4, Task 1)
Reduce \(A\) and \(B\) to echelon form, find their ranks, and identify the free variables:
\[A = \begin{bmatrix}1&2&0&1\\0&1&1&0\\1&2&0&1\end{bmatrix}, \qquad B = \begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}\]
Click to see the solution
Matrix \(A\) (\(3 \times 4\)):
Row reduce: \[\begin{bmatrix}1&2&0&1\\0&1&1&0\\1&2&0&1\end{bmatrix} \xrightarrow{R_3-R_1} \begin{bmatrix}1&2&0&1\\0&1&1&0\\0&0&0&0\end{bmatrix}\]
Rank: \(r = 2\). Pivots in columns 1 and 2. Free variables: \(x_3\) and \(x_4\).
Matrix \(B\) (\(3 \times 3\)):
Row reduce: \[\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix} \xrightarrow{R_2-4R_1,\,R_3-7R_1} \begin{bmatrix}1&2&3\\0&-3&-6\\0&-6&-12\end{bmatrix} \xrightarrow{R_3-2R_2} \begin{bmatrix}1&2&3\\0&-3&-6\\0&0&0\end{bmatrix}\]
Rank: \(r = 2\). Pivots in columns 1 and 2. Free variable: \(x_3\).
Answer:
- \(A\): rank 2, free variables \(x_3, x_4\).
- \(B\): rank 2, free variable \(x_3\).
4.23. Column Space of an Invertible Matrix (Tutorial 4, Task 2)
If \(A\) is any \(8 \times 8\) invertible matrix, what is its column space? Why?
Click to see the solution
Key Concept: For an invertible matrix, the rank equals the number of rows (and columns), so the column space is the entire ambient space.
If \(A\) is \(8 \times 8\) and invertible, then \(\text{rank}(A) = 8\). Since \(\text{Col}(A) \subseteq \mathbb{R}^8\) and has dimension 8, we have: \[\text{Col}(A) = \mathbb{R}^8\]
Why? Because \(A\mathbf{x} = \mathbf{b}\) has a unique solution \(\mathbf{x} = A^{-1}\mathbf{b}\) for every \(\mathbf{b} \in \mathbb{R}^8\). This means every vector \(\mathbf{b}\) is in the column space.
Answer: \(\text{Col}(A) = \mathbb{R}^8\), because \(A\) has full rank 8 and its columns span all of \(\mathbb{R}^8\).
4.24. Solvability Condition and Complete Solution (Tutorial 4, Task 3)
Under what condition on \(b_1, b_2, b_3\) is the following system solvable? Include \(b\) as a fourth column in \([A \mid b]\). Find all solutions when that condition holds: \[\begin{aligned}x + 2y - 2z &= b_1 \\ 2x + 5y - 4z &= b_2 \\ 4x + 9y - 8z &= b_3\end{aligned}\]
Click to see the solution
Key Concept: Augmented matrix row reduction reveals the solvability condition as a constraint on \(b\), and the free variables give the null space.
- Row reduce \([A \mid b]\): \[\left[\begin{array}{ccc|c}1&2&-2&b_1\\2&5&-4&b_2\\4&9&-8&b_3\end{array}\right] \xrightarrow{R_2-2R_1,\,R_3-4R_1} \left[\begin{array}{ccc|c}1&2&-2&b_1\\0&1&0&b_2-2b_1\\0&1&0&b_3-4b_1\end{array}\right]\]
\[\xrightarrow{R_3-R_2} \left[\begin{array}{ccc|c}1&2&-2&b_1\\0&1&0&b_2-2b_1\\0&0&0&b_3-4b_1-(b_2-2b_1)\end{array}\right] = \left[\begin{array}{ccc|c}1&2&-2&b_1\\0&1&0&b_2-2b_1\\0&0&0&b_3-b_2-2b_1\end{array}\right]\]
Solvability condition: The last row requires \(b_3 - b_2 - 2b_1 = 0\), i.e.: \[\boxed{2b_1 + b_2 - b_3 = 0}\]
Complete solution (when the condition holds): Free variable \(z = t\).
- From row 2: \(y = b_2 - 2b_1\).
- From row 1: \(x = b_1 - 2y + 2z = b_1 - 2(b_2 - 2b_1) + 2t = 5b_1 - 2b_2 + 2t\).
\[\begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}5b_1-2b_2\\b_2-2b_1\\0\end{pmatrix} + t\begin{pmatrix}2\\0\\1\end{pmatrix}\]
Answer: Solvability condition: \(2b_1 + b_2 = b_3\). When satisfied, the complete solution is \((5b_1-2b_2,\; b_2-2b_1,\; 0)^T + t(2,0,1)^T\).
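The solvability condition and the particular-solution formula can both be exercised numerically: one \(b\) satisfying \(2b_1 + b_2 = b_3\) and one violating it.

```python
import numpy as np

A = np.array([[1, 2, -2],
              [2, 5, -4],
              [4, 9, -8]])

def solvable(b):
    # consistent iff appending b as a column does not raise the rank
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_good = np.array([1, 3, 5])   # 2*1 + 3 = 5 -> condition holds
b_bad = np.array([1, 3, 6])    # 2*1 + 3 != 6 -> condition fails

print(solvable(b_good), solvable(b_bad))  # True False

# particular solution from the formula x = (5b1 - 2b2, b2 - 2b1, 0)
b1, b2 = b_good[0], b_good[1]
xp = np.array([5 * b1 - 2 * b2, b2 - 2 * b1, 0])
print(A @ xp)  # reproduces b_good
```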
4.25. Choose \(q\) to Control Rank (Tutorial 4, Task 4)
Choose the number \(q\) so that (if possible) the ranks are (a) 1, (b) 2, (c) 3:
\[A = \begin{bmatrix}6&4&2\\-3&-2&-1\\9&6&q\end{bmatrix}, \qquad B = \begin{bmatrix}3&1&3\\q&2&q\end{bmatrix}\]
Click to see the solution
Matrix \(A\) (\(3 \times 3\)):
Row reduce: \[\begin{bmatrix}6&4&2\\-3&-2&-1\\9&6&q\end{bmatrix} \xrightarrow{R_2+\frac{1}{2}R_1,\,R_3-\frac{3}{2}R_1} \begin{bmatrix}6&4&2\\0&0&0\\0&0&q-3\end{bmatrix}\]
Rows 1 and 2 are proportional, so the first two rows contribute only one pivot. Rank depends on \(q\):
- (a) Rank 1: Need the third row to also be zero: \(q = 3\).
- (b) Rank 2: Need \(q \neq 3\) (so the third row gives a second pivot).
- (c) Rank 3: Impossible — rows 1 and 2 are proportional, maximum rank is 2.
Matrix \(B\) (\(2 \times 3\)):
\[\begin{bmatrix}3&1&3\\q&2&q\end{bmatrix}\]
Row reduce: \(R_2 - \frac{q}{3}R_1\): \[\begin{bmatrix}3&1&3\\0&2-\frac{q}{3}&0\end{bmatrix}\]
- (a) Rank 1: Need \(2 - q/3 = 0 \Rightarrow q = 6\).
- (b) Rank 2: Need \(q \neq 6\).
- (c) Rank 3: Impossible for a \(2 \times 3\) matrix (rank \(\leq \min(2,3) = 2\)).
Answer:
| | Rank 1 | Rank 2 | Rank 3 |
|---|---|---|---|
| \(A\) | \(q = 3\) | \(q \neq 3\) | Impossible |
| \(B\) | \(q = 6\) | \(q \neq 6\) | Impossible |
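Sampling \(q\) at and away from the critical values confirms the table:

```python
import numpy as np

def A(q):
    return np.array([[6, 4, 2], [-3, -2, -1], [9, 6, q]], dtype=float)

def B(q):
    return np.array([[3, 1, 3], [q, 2, q]], dtype=float)

print(np.linalg.matrix_rank(A(3)), np.linalg.matrix_rank(A(0)))  # 1 2
print(np.linalg.matrix_rank(B(6)), np.linalg.matrix_rank(B(0)))  # 1 2
```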
4.26. Gram-Schmidt Process and QR Decomposition (Tutorial 4, Task 5)
Apply the Gram-Schmidt process to \[\mathbf{a} = \begin{bmatrix}0\\0\\1\end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix}0\\1\\1\end{bmatrix}, \quad \mathbf{c} = \begin{bmatrix}1\\1\\1\end{bmatrix}\] and write the result in the form \(A = QR\).
Click to see the solution
Key Concept: Gram-Schmidt produces orthonormal columns \(Q = [q_1, q_2, q_3]\), and \(R = Q^T A\) is upper triangular.
- \(\mathbf{q}_1\): \(\mathbf{v}_1 = \mathbf{a} = (0,0,1)^T\), \(\|\mathbf{v}_1\| = 1\). \(\mathbf{q}_1 = (0,0,1)^T\).
- \(\mathbf{q}_2\): Subtract projection of \(\mathbf{b}\) onto \(\mathbf{q}_1\):
- \(\mathbf{b}\cdot\mathbf{q}_1 = 1\).
- \(\mathbf{v}_2 = \mathbf{b} - 1\cdot\mathbf{q}_1 = (0,1,1)^T - (0,0,1)^T = (0,1,0)^T\).
- \(\|\mathbf{v}_2\| = 1\). \(\mathbf{q}_2 = (0,1,0)^T\).
- \(\mathbf{q}_3\): Subtract projections of \(\mathbf{c}\) onto \(\mathbf{q}_1\) and \(\mathbf{q}_2\):
- \(\mathbf{c}\cdot\mathbf{q}_1 = 1\), \(\mathbf{c}\cdot\mathbf{q}_2 = 1\).
- \(\mathbf{v}_3 = \mathbf{c} - 1\cdot\mathbf{q}_1 - 1\cdot\mathbf{q}_2 = (1,1,1)^T - (0,0,1)^T - (0,1,0)^T = (1,0,0)^T\).
- \(\|\mathbf{v}_3\| = 1\). \(\mathbf{q}_3 = (1,0,0)^T\).
- \(Q\) and \(R\):
\[Q = \begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}\]
\[R = Q^T A = \begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}^T\begin{bmatrix}0&0&1\\0&1&1\\1&1&1\end{bmatrix} = \begin{bmatrix}1&1&1\\0&1&1\\0&0&1\end{bmatrix}\]
Verify: \(QR = \begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}\begin{bmatrix}1&1&1\\0&1&1\\0&0&1\end{bmatrix} = \begin{bmatrix}0&0&1\\0&1&1\\1&1&1\end{bmatrix} = A\) ✓
Answer: \(Q = \begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}\), \(R = \begin{bmatrix}1&1&1\\0&1&1\\0&0&1\end{bmatrix}\).
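A short NumPy check of this factorization. Note that `numpy.linalg.qr` may choose different signs for the columns of \(Q\), so the sketch verifies the hand-computed factors directly:

```python
import numpy as np

A = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
Q = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=float)
R = Q.T @ A                              # R = Q^T A, as in the solution

assert np.allclose(Q.T @ Q, np.eye(3))   # columns of Q are orthonormal
assert np.allclose(R, np.triu(R))        # R is upper triangular
assert np.allclose(Q @ R, A)             # A = QR
```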
4.27. Symmetric \(LDL^T\) Factorization (Tutorial 4, Task 6)
Compute the symmetric \(LDL^T\) factorization of:
\[A_1 = \begin{bmatrix}1&3&5\\3&12&18\\5&18&30\end{bmatrix}, \qquad A_2 = \begin{bmatrix}a&b\\b&d\end{bmatrix}\]
Click to see the solution
Key Concept: For a symmetric matrix, \(A = LDL^T\) where \(L\) is unit lower triangular (1s on diagonal) and \(D\) is diagonal. The \(LDL^T\) factorization is the symmetric version of \(LU\).
Matrix \(A_1\):
Gaussian elimination, recording pivots and multipliers:
Pivot 1: \(d_1 = 1\). Multipliers: \(\ell_{21} = 3/1 = 3\), \(\ell_{31} = 5/1 = 5\).
After eliminating column 1: \[\begin{bmatrix}1&3&5\\0&12-9&18-15\\0&18-15&30-25\end{bmatrix} = \begin{bmatrix}1&3&5\\0&3&3\\0&3&5\end{bmatrix}\]
Pivot 2: \(d_2 = 3\). Multiplier: \(\ell_{32} = 3/3 = 1\).
After eliminating column 2: \[\begin{bmatrix}1&3&5\\0&3&3\\0&0&5-1\cdot 3\end{bmatrix} = \begin{bmatrix}1&3&5\\0&3&3\\0&0&2\end{bmatrix}\]
Pivot 3: \(d_3 = 2\).
Result: \[L = \begin{bmatrix}1&0&0\\3&1&0\\5&1&1\end{bmatrix}, \quad D = \begin{bmatrix}1&0&0\\0&3&0\\0&0&2\end{bmatrix}\]
Matrix \(A_2\) (symbolic):
Pivot 1: \(d_1 = a\). Multiplier: \(\ell_{21} = b/a\). After elimination: \(d_2 = d - b^2/a = (ad - b^2)/a\).
\[L = \begin{bmatrix}1&0\\b/a&1\end{bmatrix}, \quad D = \begin{bmatrix}a&0\\0&\frac{ad-b^2}{a}\end{bmatrix}\]
Note: The factorization requires \(a \neq 0\) (the first pivot must be nonzero). If in addition \(ad - b^2 \neq 0\), both pivots are nonzero and \(A_2\) is nonsingular.
Answer:
- \(A_1 = LDL^T\) with \(L = \begin{bmatrix}1&0&0\\3&1&0\\5&1&1\end{bmatrix}\), \(D = \text{diag}(1, 3, 2)\).
- \(A_2 = LDL^T\) with \(L = \begin{bmatrix}1&0\\b/a&1\end{bmatrix}\), \(D = \text{diag}(a, (ad-b^2)/a)\).
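The elimination steps above can be automated. Below is a minimal \(LDL^T\) sketch (no pivoting, assuming all pivots are nonzero); it is a verification aid, not a robust implementation:

```python
import numpy as np

# Minimal LDL^T factorization without pivoting (assumes nonzero pivots).
def ldlt(A):
    A = A.astype(float)
    n = A.shape[0]
    L, d = np.eye(n), np.zeros(n)
    for k in range(n):
        d[k] = A[k, k]                       # pivot d_k
        L[k+1:, k] = A[k+1:, k] / d[k]       # multipliers l_ik
        # rank-1 update of the trailing block
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k]) * d[k]
    return L, np.diag(d)

A1 = np.array([[1, 3, 5], [3, 12, 18], [5, 18, 30]])
L, D = ldlt(A1)
assert np.allclose(L @ D @ L.T, A1)
print(np.diag(D))  # pivots -> [1. 3. 2.]
```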
4.28. Subspace, Orthogonal Complement, and Decomposition (Tutorial 4, Task 7)
Find a basis for the subspace \(S \subseteq \mathbb{R}^4\) spanned by all solutions of \(x_1 + x_2 + x_3 - x_4 = 0\).
Find a basis for the orthogonal complement \(S^\perp\).
Find \(b_1 \in S\) and \(b_2 \in S^\perp\) so that \(b_1 + b_2 = b = (1, 1, 1, 1)^T\).
Click to see the solution
Key Concept: The subspace \(S\) is the null space of the \(1 \times 4\) matrix \(A = [1, 1, 1, -1]\). Its orthogonal complement is the row space of \(A\).
(a) Basis for \(S\):
Solve \(x_1 + x_2 + x_3 - x_4 = 0\), i.e. \(x_1 = -x_2 - x_3 + x_4\). Free variables: \(x_2, x_3, x_4\).
- \(x_2 = 1\): \(\mathbf{s}_1 = (-1, 1, 0, 0)^T\).
- \(x_3 = 1\): \(\mathbf{s}_2 = (-1, 0, 1, 0)^T\).
- \(x_4 = 1\): \(\mathbf{s}_3 = (1, 0, 0, 1)^T\).
\(\dim S = 3\).
(b) Basis for \(S^\perp\):
\(S^\perp = \text{Row}(A) = \text{span}\{(1, 1, 1, -1)^T\}\). \(\dim S^\perp = 1\).
(c) Decompose \(b = (1, 1, 1, 1)^T\):
Write \(b = b_1 + b_2\) where \(b_1 \in S\) and \(b_2 \in S^\perp\).
Since \(S^\perp = \text{span}\{n\}\) with \(n = (1,1,1,-1)^T\): \[b_2 = \text{proj}_{S^\perp}(b) = \frac{b \cdot n}{n \cdot n} n = \frac{1+1+1-1}{1+1+1+1}(1,1,1,-1)^T = \frac{2}{4}(1,1,1,-1)^T = \frac{1}{2}(1,1,1,-1)^T\]
\[b_1 = b - b_2 = (1,1,1,1)^T - (1/2,1/2,1/2,-1/2)^T = (1/2,1/2,1/2,3/2)^T\]
Verify: \(b_1 + b_2 = (1,1,1,1)^T\) ✓. Check \(b_1 \in S\): \(1/2+1/2+1/2-3/2 = 0\) ✓. Check \(b_2 \in S^\perp\): \(b_2 \parallel n\) ✓.
Answer:
- Basis for \(S\): \(\{(-1,1,0,0)^T, (-1,0,1,0)^T, (1,0,0,1)^T\}\).
- Basis for \(S^\perp\): \(\{(1,1,1,-1)^T\}\).
- \(b_1 = (1/2, 1/2, 1/2, 3/2)^T \in S\), \(b_2 = (1/2, 1/2, 1/2, -1/2)^T \in S^\perp\).
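The projection computation is easy to replicate numerically; a quick NumPy check of the decomposition:

```python
import numpy as np

b = np.array([1, 1, 1, 1], dtype=float)
n = np.array([1, 1, 1, -1], dtype=float)   # spans S_perp

b2 = (b @ n) / (n @ n) * n                 # component in S_perp
b1 = b - b2                                # component in S

assert np.allclose(b1 + b2, b)
assert np.isclose(b1 @ n, 0)               # b1 orthogonal to n, hence b1 in S
assert np.allclose(b1, [0.5, 0.5, 0.5, 1.5])
assert np.allclose(b2, [0.5, 0.5, 0.5, -0.5])
```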
4.29. Construct a System with a Given Complete Solution (Test I Recap, Task 1)
I. Find a \(2 \times 3\) system \(Ax = b\) whose complete solution is \[x = \begin{bmatrix}1\\2\\0\end{bmatrix} + w\begin{bmatrix}1\\3\\1\end{bmatrix}\]
II. Find a \(3 \times 3\) system with solutions exactly when \(b_1 + b_2 = b_3\).
Click to see the solution
Key Concept: The complete solution consists of a particular solution plus the null space. We need to reverse-engineer the matrix and right-hand side from this solution.
Part I: Find a \(2 \times 3\) system
Identify the particular solution and null space direction:
- Particular solution: \(\mathbf{x}_p = (1,2,0)^T\)
- Null space direction: \(\mathbf{x}_h = (1,3,1)^T\) (free variable \(w\))
Construct the matrix \(A\) such that \(A\mathbf{x}_h = \mathbf{0}\):
\[A = \begin{bmatrix}1&0&-1\\0&1&-3\end{bmatrix}\]
Verify: \(A\mathbf{x}_h = (1-1, 3-3)^T = (0,0)^T\) ✓
Find \(\mathbf{b} = A\mathbf{x}_p\):
\[\mathbf{b} = \begin{bmatrix}1&0&-1\\0&1&-3\end{bmatrix}\begin{bmatrix}1\\2\\0\end{bmatrix} = \begin{bmatrix}1\\2\end{bmatrix}\]
Part II: Find a \(3 \times 3\) system
Add a third row that is the sum of the first two: \[A = \begin{bmatrix}1&0&-1\\0&1&-3\\1&1&-4\end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix}b_1\\b_2\\b_1+b_2\end{bmatrix}\]
Answer:
- Part I: \(\begin{bmatrix}1&0&-1\\0&1&-3\end{bmatrix}x = \begin{bmatrix}1\\2\end{bmatrix}\)
- Part II: \(\begin{bmatrix}1&0&-1\\0&1&-3\\1&1&-4\end{bmatrix}x = \begin{bmatrix}b_1\\b_2\\b_1+b_2\end{bmatrix}\)
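The reverse-engineered system from Part I can be checked directly: \(A\mathbf{x}_h\) must vanish and every \(\mathbf{x}_p + w\mathbf{x}_h\) must solve \(A\mathbf{x} = \mathbf{b}\). A NumPy sketch:

```python
import numpy as np

A = np.array([[1, 0, -1], [0, 1, -3]], dtype=float)
xp = np.array([1, 2, 0], dtype=float)   # particular solution
xh = np.array([1, 3, 1], dtype=float)   # null space direction
b = A @ xp

assert np.allclose(A @ xh, 0)           # xh lies in the null space
for w in (-2.0, 0.0, 5.0):              # every x = xp + w*xh solves Ax = b
    assert np.allclose(A @ (xp + w * xh), b)
assert np.allclose(b, [1, 2])
```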
4.30. Find Complete Solutions of SLAEs (Test I Recap, Task 2)
Find the complete solutions of the following systems.
\[\begin{bmatrix}1&3&3\\2&6&9\\-1&-3&3\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix}1\\5\\5\end{bmatrix}\]
\[\begin{bmatrix}1&3&1&2\\2&6&4&8\\0&0&2&4\end{bmatrix}\begin{bmatrix}x\\y\\z\\t\end{bmatrix} = \begin{bmatrix}1\\3\\1\end{bmatrix}\]
Click to see the solution
Key Concept: The complete solution is the sum of a particular solution and the general homogeneous solution.
Part (a):
Row reduce the augmented matrix:
\[\left[\begin{array}{ccc|c}1&3&3&1\\2&6&9&5\\-1&-3&3&5\end{array}\right] \xrightarrow{R_2-2R_1,\,R_3+R_1} \left[\begin{array}{ccc|c}1&3&3&1\\0&0&3&3\\0&0&6&6\end{array}\right] \xrightarrow{R_3-2R_2} \left[\begin{array}{ccc|c}1&3&3&1\\0&0&3&3\\0&0&0&0\end{array}\right]\]
Back-substitute: From row 2: \(3z = 3 \Rightarrow z = 1\). From row 1: \(x + 3y + 3(1) = 1 \Rightarrow x + 3y = -2\). Free variable \(y = \alpha\), so \(x = -2 - 3\alpha\).
Complete solution: \[\begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}-2\\0\\1\end{pmatrix} + \alpha\begin{pmatrix}-3\\1\\0\end{pmatrix}\]
Part (b):
Row reduce: \[\left[\begin{array}{cccc|c}1&3&1&2&1\\2&6&4&8&3\\0&0&2&4&1\end{array}\right] \xrightarrow{R_2-2R_1} \left[\begin{array}{cccc|c}1&3&1&2&1\\0&0&2&4&1\\0&0&2&4&1\end{array}\right] \xrightarrow{R_3-R_2} \left[\begin{array}{cccc|c}1&3&1&2&1\\0&0&2&4&1\\0&0&0&0&0\end{array}\right]\] \[\xrightarrow{R_1-\frac{1}{2}R_2} \left[\begin{array}{cccc|c}1&3&0&0&\frac{1}{2}\\0&0&1&2&\frac{1}{2}\\0&0&0&0&0\end{array}\right]\]
Free variables \(y = \alpha\), \(t = \beta\); pivot variables \(x = \tfrac{1}{2} - 3\alpha\), \(z = \tfrac{1}{2} - 2\beta\).
Complete solution: \[\begin{pmatrix}x\\y\\z\\t\end{pmatrix} = \begin{pmatrix}1/2\\0\\1/2\\0\end{pmatrix} + \alpha\begin{pmatrix}-3\\1\\0\\0\end{pmatrix} + \beta\begin{pmatrix}0\\0\\-2\\1\end{pmatrix}\]
Answer:
- (a): \((-2,0,1)^T + \alpha(-3,1,0)^T\)
- (b): \((1/2,0,1/2,0)^T + \alpha(-3,1,0,0)^T + \beta(0,0,-2,1)^T\)
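Both complete solutions can be verified by substituting a few values of the free parameters; the sketch below (assuming NumPy) does exactly that:

```python
import numpy as np

# Part (a): x = xp + alpha * xn
A = np.array([[1, 3, 3], [2, 6, 9], [-1, -3, 3]], dtype=float)
b = np.array([1, 5, 5], dtype=float)
xp = np.array([-2, 0, 1], dtype=float)
xn = np.array([-3, 1, 0], dtype=float)
for alpha in (-1.0, 0.0, 2.0):
    assert np.allclose(A @ (xp + alpha * xn), b)

# Part (b): x = yp + alpha * n1 + beta * n2
B = np.array([[1, 3, 1, 2], [2, 6, 4, 8], [0, 0, 2, 4]], dtype=float)
c = np.array([1, 3, 1], dtype=float)
yp = np.array([0.5, 0, 0.5, 0])
n1 = np.array([-3, 1, 0, 0], dtype=float)
n2 = np.array([0, 0, -2, 1], dtype=float)
for alpha, beta in ((0.0, 0.0), (2.0, -1.0), (-1.0, 3.0)):
    assert np.allclose(B @ (yp + alpha * n1 + beta * n2), c)
```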
4.31. Orthogonal Complement of Row Space (Test I Recap, Task 3)
Find a basis for the orthogonal complement of the row space of \[A = \begin{bmatrix}1&0&2\\1&1&4\end{bmatrix}\]
Split \(\mathbf{x} = (3,3,3)^T\) into a row space component \(\mathbf{x}_r\) and a null space component \(\mathbf{x}_n\).
Click to see the solution
Key Concept: The orthogonal complement of the row space is the null space (Fundamental Theorem). For part (b), decompose \(\mathbf{x}\) as a combination of row space and null space vectors.
Part (a):
Row reduce \(A\): \[\begin{bmatrix}1&0&2\\1&1&4\end{bmatrix} \xrightarrow{R_2-R_1} \begin{bmatrix}1&0&2\\0&1&2\end{bmatrix}\]
Free variable \(z\): \(x = -2z\), \(y = -2z\). Basis for \(\text{Nul}(A)\): \(\left\{\begin{pmatrix}-2\\-2\\1\end{pmatrix}\right\}\).
Part (b):
Basis for \(\text{Row}(A)\): \(\mathbf{r}_1 = (1,0,2)^T\), \(\mathbf{r}_2 = (0,1,2)^T\). Write \(\mathbf{x} = \alpha\mathbf{r}_1 + \beta\mathbf{r}_2 + \gamma(-2,-2,1)^T\):
\[\begin{cases}\alpha - 2\gamma = 3\\\beta - 2\gamma = 3\\2\alpha + 2\beta + \gamma = 3\end{cases}\]
From rows 1 and 2: \(\alpha = \beta = 3 + 2\gamma\). Substitute into row 3: \(2(3+2\gamma) + 2(3+2\gamma) + \gamma = 3 \Rightarrow 12 + 9\gamma = 3 \Rightarrow \gamma = -1\).
Thus \(\alpha = \beta = 1\), and:
- \(\mathbf{x}_r = 1\cdot(1,0,2)^T + 1\cdot(0,1,2)^T = (1,1,4)^T\)
- \(\mathbf{x}_n = -1\cdot(-2,-2,1)^T = (2,2,-1)^T\)
Verify: \((1,1,4) + (2,2,-1) = (3,3,3)\) ✓. \(A\mathbf{x}_n = (0,0)^T\) ✓.
Answer:
- (a): Basis for \(\text{Row}(A)^\perp\): \(\left\{\begin{pmatrix}-2\\-2\\1\end{pmatrix}\right\}\)
- (b): \(\mathbf{x}_r = (1,1,4)^T\), \(\mathbf{x}_n = (2,2,-1)^T\)
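A quick NumPy check that the split is consistent: the components must sum to \(\mathbf{x}\), \(\mathbf{x}_n\) must lie in the null space, and the two components must be orthogonal (Fundamental Theorem):

```python
import numpy as np

A = np.array([[1, 0, 2], [1, 1, 4]], dtype=float)
x = np.array([3, 3, 3], dtype=float)
xr = np.array([1, 1, 4], dtype=float)    # row space component
xn = np.array([2, 2, -1], dtype=float)   # null space component

assert np.allclose(xr + xn, x)
assert np.allclose(A @ xn, 0)            # xn in Nul(A)
assert np.isclose(xr @ xn, 0)            # components are orthogonal
```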
4.32. Why These Orthogonal Statements Are False (Test I Recap, Task 4)
Why are the following statements false?
- Statement 1: If \(V\) is orthogonal to \(W\), then \(V^\perp\) is orthogonal to \(W^\perp\).
- Statement 2: If \(V\) is orthogonal to \(W\), and \(W\) is orthogonal to \(Z\), then \(V^\perp\) is orthogonal to \(Z\).
Click to see the solution
Key Concept: These statements claim orthogonality relations that don’t hold. We disprove them with counterexamples in \(\mathbb{R}^3\).
Statement 1:
Let \(V = \text{span}\{(1,0,0)^T\}\), \(W = \text{span}\{(0,1,0)^T\}\). Then \(V \perp W\).
\(V^\perp = \text{span}\{(0,1,0)^T, (0,0,1)^T\}\), \(W^\perp = \text{span}\{(1,0,0)^T, (0,0,1)^T\}\).
Both complements contain \((0,0,1)^T\), and \((0,0,1)^T \cdot (0,0,1)^T = 1 \neq 0\). So \(V^\perp \not\perp W^\perp\). False.
Statement 2:
Let \(V = \text{span}\{(1,0,0)^T\}\), \(W = \text{span}\{(0,1,0)^T\}\), \(Z = \text{span}\{(1,0,1)^T\}\).
- \(V \perp W\): \((1,0,0)\cdot(0,1,0) = 0\) ✓
- \(W \perp Z\): \((0,1,0)\cdot(1,0,1) = 0\) ✓
- \(V^\perp = \text{span}\{(0,1,0)^T,(0,0,1)^T\}\); \((0,0,1)^T \in V^\perp\).
- \((0,0,1)\cdot(1,0,1) = 1 \neq 0\). So \(V^\perp \not\perp Z\). False.
Answer: Both statements are false by the counterexamples above.
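The counterexamples reduce to a handful of dot products, which a short NumPy sketch can confirm:

```python
import numpy as np

v = np.array([1, 0, 0])   # spans V
w = np.array([0, 1, 0])   # spans W
z = np.array([1, 0, 1])   # spans Z
e3 = np.array([0, 0, 1])  # lies in both V_perp and W_perp

assert v @ w == 0 and w @ z == 0   # the hypotheses of both statements hold
assert e3 @ e3 != 0                # V_perp and W_perp share a nonzero vector
assert e3 @ z != 0                 # a vector of V_perp not orthogonal to Z
```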
4.33. Gram-Schmidt Orthogonalization (Test I Recap, Task 5)
Using the following set of non-orthogonal vectors, determine a corresponding set of orthonormal vectors using the Gram-Schmidt process: \[\mathbf{a} = \begin{bmatrix}1\\1\\0\end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix}1\\0\\1\end{bmatrix}, \quad \mathbf{c} = \begin{bmatrix}0\\1\\1\end{bmatrix}\]
Click to see the solution
Key Concept: The Gram-Schmidt process creates orthogonal vectors by subtracting projections onto previously computed vectors, then normalizes to get an orthonormal set.
First vector: \[\mathbf{v}_1 = \mathbf{a} = \begin{pmatrix}1\\1\\0\end{pmatrix}, \quad \|\mathbf{v}_1\| = \sqrt{2}, \quad \mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\\0\end{pmatrix}\]
Second vector: Subtract projection of \(\mathbf{b}\) onto \(\mathbf{v}_1\):
- \(\mathbf{b}\cdot\mathbf{v}_1 = 1\), \(\|\mathbf{v}_1\|^2 = 2\).
- \(\mathbf{v}_2 = \mathbf{b} - \frac{1}{2}\mathbf{v}_1 = (1,0,1)^T - (1/2,1/2,0)^T = (1/2,-1/2,1)^T\).
- \(\|\mathbf{v}_2\|^2 = 1/4+1/4+1 = 3/2\). \(\mathbf{q}_2 = \frac{2}{\sqrt{6}}(1/2,-1/2,1)^T = (1/\sqrt{6},-1/\sqrt{6},2/\sqrt{6})^T\).
Third vector: Subtract projections of \(\mathbf{c}\) onto \(\mathbf{v}_1\) and \(\mathbf{v}_2\):
- \(\mathbf{c}\cdot\mathbf{v}_1 = 1\); \(\text{proj}_{\mathbf{v}_1}\mathbf{c} = (1/2,1/2,0)^T\).
- \(\mathbf{c}\cdot\mathbf{v}_2 = 0(1/2)+1(-1/2)+1(1) = 1/2\); \(\text{proj}_{\mathbf{v}_2}\mathbf{c} = \frac{1/2}{3/2}(1/2,-1/2,1)^T = (1/6,-1/6,1/3)^T\).
- \(\mathbf{v}_3 = \mathbf{c} - (1/2,1/2,0)^T - (1/6,-1/6,1/3)^T = (0,1,1)^T - (2/3,1/3,1/3)^T = (-2/3,2/3,2/3)^T\).
- \(\|\mathbf{v}_3\| = \sqrt{4/9+4/9+4/9} = 2/\sqrt{3}\). \(\mathbf{q}_3 = \frac{\sqrt{3}}{2}(-2/3,2/3,2/3)^T = (-1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})^T\).
Answer: \[\mathbf{q}_1 = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\\0\end{pmatrix}, \quad \mathbf{q}_2 = \frac{1}{\sqrt{6}}\begin{pmatrix}1\\-1\\2\end{pmatrix}, \quad \mathbf{q}_3 = \frac{1}{\sqrt{3}}\begin{pmatrix}-1\\1\\1\end{pmatrix}\]
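The procedure above generalizes to any linearly independent input list. A minimal Gram-Schmidt sketch (classical variant, no reorthogonalization) that reproduces these \(\mathbf{q}_i\):

```python
import numpy as np

def gram_schmidt(vectors):
    qs = []
    for v in vectors:
        u = v.astype(float)
        for q in qs:
            u = u - (u @ q) * q          # subtract projection onto earlier q
        qs.append(u / np.linalg.norm(u)) # normalize
    return qs

a, b, c = np.array([1, 1, 0]), np.array([1, 0, 1]), np.array([0, 1, 1])
q1, q2, q3 = gram_schmidt([a, b, c])

assert np.allclose(q1, np.array([1, 1, 0]) / np.sqrt(2))
assert np.allclose(q2, np.array([1, -1, 2]) / np.sqrt(6))
assert np.allclose(q3, np.array([-1, 1, 1]) / np.sqrt(3))
```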
4.34. Solve Using LU-Decomposition (Test I Recap, Task 6)
Solve the matrix equation using LU-decomposition and forward/backward substitution: \[\begin{bmatrix}2&1&1\\4&1&0\\-2&2&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} = \begin{bmatrix}3\\3\\-2\end{bmatrix}\]
Click to see the solution
Key Concept: \(A = LU\) where \(L\) is unit lower triangular and \(U\) is upper triangular. Solve \(L\mathbf{y} = \mathbf{b}\) (forward), then \(U\mathbf{x} = \mathbf{y}\) (backward).
- LU decomposition: Gaussian elimination on \(A\):
- \(m_{21} = 4/2 = 2\), \(m_{31} = -2/2 = -1\). After step 1: \(\begin{bmatrix}2&1&1\\0&-1&-2\\0&3&2\end{bmatrix}\).
- \(m_{32} = 3/(-1) = -3\). After step 2: \(U = \begin{bmatrix}2&1&1\\0&-1&-2\\0&0&-4\end{bmatrix}\).
- \(L = \begin{bmatrix}1&0&0\\2&1&0\\-1&-3&1\end{bmatrix}\).
- Forward substitution \(L\mathbf{y} = \mathbf{b}\):
- \(y_1 = 3\)
- \(2(3) + y_2 = 3 \Rightarrow y_2 = -3\)
- \(-3 - 3(-3) + y_3 = -2 \Rightarrow y_3 = -8\)
- Backward substitution \(U\mathbf{x} = \mathbf{y}\):
- \(-4x_3 = -8 \Rightarrow x_3 = 2\)
- \(-x_2 - 2(2) = -3 \Rightarrow x_2 = -1\)
- \(2x_1 - 1 + 2 = 3 \Rightarrow x_1 = 1\)
Answer: \(\mathbf{x} = (1,-1,2)^T\).
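The factors and the substitution steps can be confirmed numerically; for brevity the sketch uses `numpy.linalg.solve` on the triangular factors in place of hand substitution:

```python
import numpy as np

A = np.array([[2, 1, 1], [4, 1, 0], [-2, 2, 1]], dtype=float)
b = np.array([3, 3, -2], dtype=float)
L = np.array([[1, 0, 0], [2, 1, 0], [-1, -3, 1]], dtype=float)
U = np.array([[2, 1, 1], [0, -1, -2], [0, 0, -4]], dtype=float)

assert np.allclose(L @ U, A)   # factorization is correct
y = np.linalg.solve(L, b)      # forward substitution: Ly = b
x = np.linalg.solve(U, y)      # backward substitution: Ux = y
assert np.allclose(y, [3, -3, -8])
assert np.allclose(x, [1, -1, 2])
```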
4.35. Check Symmetric Positive Definiteness (Test I Recap, Task 7)
Check whether the following matrices are symmetric positive definite: \[A = \begin{bmatrix}2&-1&0\\-1&2&-1\\0&-1&2\end{bmatrix}, \quad B = \begin{bmatrix}5&2&1\\2&4&1\\1&1&3\end{bmatrix}, \quad C = \begin{bmatrix}4&0&1\\0&3&0\\1&0&2\end{bmatrix}\]
Click to see the solution
Key Concept: A symmetric matrix is positive definite iff all its leading principal minors are positive (Sylvester’s criterion).
Matrix \(A\):
- \(D_1 = 2 > 0\) ✓
- \(D_2 = \begin{vmatrix}2&-1\\-1&2\end{vmatrix} = 4 - 1 = 3 > 0\) ✓
- \(D_3 = 2(3) - (-1)(-2-0) = 6 - 2 = 4 > 0\) ✓
\(A\) is positive definite.
Matrix \(B\):
- \(D_1 = 5 > 0\) ✓
- \(D_2 = 20 - 4 = 16 > 0\) ✓
- \(D_3 = 5(12-1) - 2(6-1) + 1(2-4) = 55 - 10 - 2 = 43 > 0\) ✓
\(B\) is positive definite.
Matrix \(C\):
- \(D_1 = 4 > 0\) ✓
- \(D_2 = 12 > 0\) ✓
- \(D_3 = 3(8-1) = 21 > 0\) (expanding along row 2) ✓
\(C\) is positive definite.
Answer: All three matrices \(A\), \(B\), \(C\) are symmetric positive definite.
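Sylvester's criterion can be cross-checked numerically: a symmetric matrix is positive definite iff its Cholesky factorization exists (equivalently, all eigenvalues are positive). A sketch using NumPy:

```python
import numpy as np

def is_spd(M):
    """True iff M is symmetric positive definite (via Cholesky)."""
    if not np.allclose(M, M.T):
        return False
    try:
        np.linalg.cholesky(M)   # raises LinAlgError if M is not PD
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2, -1, 0], [-1, 2, -1], [0, -1, 2]], dtype=float)
B = np.array([[5, 2, 1], [2, 4, 1], [1, 1, 3]], dtype=float)
C = np.array([[4, 0, 1], [0, 3, 0], [1, 0, 2]], dtype=float)

print([is_spd(M) for M in (A, B, C)])  # -> [True, True, True]
```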